I'm trying to load a model into my project and I get an exception at glDrawElements.
I read the model file (.nfg), store the vertices and indices in vectors, and use a Vertex Buffer Object to bind my model.
I tried this:
I modified the fourth parameter from (GLvoid*)(sizeof(Vector3) * x)
to (GLvoid*)(offsetof(Vertex, attribute)), but that didn't change anything (in the linked question, the problem was that he was passing a memory address in the 4th parameter, and I thought maybe I was passing the wrong offset to the wrong attribute, which would still be a problem when actually showing the model).
I'm using OpenGL ES 2.0, and I'm not targeting either Android or iOS; I'm currently working in Visual Studio 2013 on Windows 8.1.
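The attribute offsets in the drawing code below assume an interleaved Vertex struct along these lines (just a sketch, not the exact declaration from my project; Vector2 for the UV field in particular is assumed):
struct Vertex {
    Vector3 pos;     // position        -> positionAttribute
    Vector3 norm;    // normal          -> normalAttribute
    Vector3 binorm;  // binormal        -> binormalAttribute
    Vector3 tgt;     // tangent         -> tangentAttribute
    Vector2 uv;      // texture coords  -> texcoordAttribute (2 floats)
    Vector3 col;     // color           -> colorAttribute
};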
The model loader:
void loadModelNfg(const std::string &filename,
                  GLuint &vbo, GLuint &ibo, GLuint &num, Shaders shaders) {
    // read the vertices and indices from the file
    std::vector<Vertex> vertices;
    std::vector<GLushort> indices;
    _loadModelNfg(filename, vertices, indices);
    std::cout << "Mesh Loader: loaded file: " << filename << "\n";

    // create the OpenGL objects necessary for drawing
    GLuint gl_vertex_buffer_object, gl_index_buffer_object;

    // vertex buffer object -> object in which to keep the vertices
    glGenBuffers(1, &gl_vertex_buffer_object);
    glBindBuffer(GL_ARRAY_BUFFER, gl_vertex_buffer_object);
    glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex),
                 &vertices[0], GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    // index buffer object -> object in which to keep the indices
    glGenBuffers(1, &gl_index_buffer_object);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, gl_index_buffer_object);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLushort),
                 &indices[0], GL_STATIC_DRAW);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);

    vbo = gl_vertex_buffer_object;
    ibo = gl_index_buffer_object;
    num = indices.size();
}
Calling the previous function:
// for now, global variables:
GLuint vbo, ibo, num;
Shader myShaders;
int Init(ESContext* esContext) {
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    // this one works: tried with a triangle
    int ret = myShaders.Init("../Resources/Shaders/TriangleShaderVS.vs",
                             "../Resources/Shaders/TriangleShaderFS.fs");
    if (ret == 0)
        loadModelNfg("../../ResourcesPacket/Models/Bila.nfg", vbo, ibo, num, myShaders);
    return ret;
}
Drawing the model:
void Draw(ESContext* esContext) {
    Matrix world;
    world.SetIdentity();
    Matrix view = c.getView();
    Matrix persp = c.getPerspective();
    Matrix trans = world * view * persp;

    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(myShaders.program);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);

    if (myShaders.positionAttribute != -1) {
        glEnableVertexAttribArray(myShaders.positionAttribute);
        glVertexAttribPointer(myShaders.positionAttribute, 3, GL_FLOAT,
                              GL_FALSE, sizeof(Vertex), (GLvoid*)(offsetof(Vertex, pos)));
    }
    if (myShaders.normalAttribute != -1) {
        glEnableVertexAttribArray(myShaders.normalAttribute);
        glVertexAttribPointer(myShaders.normalAttribute, 3, GL_FLOAT,
                              GL_FALSE, sizeof(Vertex), (GLvoid*)(offsetof(Vertex, norm)));
    }
    if (myShaders.binormalAttribute != -1) {
        glEnableVertexAttribArray(myShaders.binormalAttribute);
        glVertexAttribPointer(myShaders.binormalAttribute, 3, GL_FLOAT,
                              GL_FALSE, sizeof(Vertex), (GLvoid*)(offsetof(Vertex, binorm)));
    }
    if (myShaders.tangentAttribute != -1) {
        glEnableVertexAttribArray(myShaders.tangentAttribute);
        glVertexAttribPointer(myShaders.tangentAttribute, 3, GL_FLOAT,
                              GL_FALSE, sizeof(Vertex), (GLvoid*)(offsetof(Vertex, tgt)));
    }
    if (myShaders.texcoordAttribute != -1) {
        glEnableVertexAttribArray(myShaders.texcoordAttribute);
        glVertexAttribPointer(myShaders.texcoordAttribute, 2, GL_FLOAT,
                              GL_FALSE, sizeof(Vertex), (GLvoid*)(offsetof(Vertex, uv)));
    }
    if (myShaders.colorAttribute != -1) {
        glEnableVertexAttribArray(myShaders.colorAttribute);
        glVertexAttribPointer(myShaders.colorAttribute, 3, GL_FLOAT,
                              GL_FALSE, sizeof(Vertex), (GLvoid*)(offsetof(Vertex, col)));
    }
    if (myShaders.MVPuniform != -1) {
        glUniformMatrix4fv(myShaders.MVPuniform, 1, GL_FALSE, (GLfloat*) trans.m);
    }

    // HERE GETS EXCEPTION
    glDrawElements(GL_TRIANGLES, num, GL_UNSIGNED_SHORT, (GLvoid*) 0);

    eglSwapBuffers(esContext->eglDisplay, esContext->eglSurface);
}
I am not sure that I am correctly binding the buffers in the loadModelNfg() function.
What could be causing this problem, and how can I resolve it?
EDIT:
GL_VENDOR: Imagination Technologies (Host GL: 'Intel');
GL_RENDERER: PowerVR PVRVFrame 4.2SGX 530 (Host 'Intel(R) HD Graphics 400');
GL_VERSION: OpenGL ES 2.0 (SDK build: 2.04.24.0809)
EDIT:
I surrounded the call with a try-catch statement, but it still breaks:
try {
    glDrawElements(GL_TRIANGLES, num, GL_UNSIGNED_SHORT, (GLvoid*)0);
}
catch (const std::exception& e) {
    std::cout << e.what() << "\n";
}
I forgot to mention that the project/solution builds successfully (after cleaning, or by rebuilding).
After learning that OpenGL doesn't throw exceptions, I started looking at how it handles errors. I found out that it "returns" error codes (or 0 on success), which can be retrieved with glGetError().
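A minimal sketch of the kind of check I sprinkled between calls (the helper name is just illustrative):
// Hypothetical helper: prints any pending GL errors with a label, so it can be
// placed between calls to narrow down which one actually fails.
static void checkGLError(const char* label) {
    GLenum err = glGetError();
    while (err != GL_NO_ERROR) {
        std::cout << "GL error after " << label << ": 0x"
                  << std::hex << err << std::dec << "\n";
        err = glGetError(); // errors can queue up, so drain them all
    }
}

// usage:
glUseProgram(myShaders.program);
checkGLError("glUseProgram");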
Going through the code with glGetError(), I found out that the error was caused by glUseProgram(myShaders.program);.
Knowing that, I went through the functions that use the myShaders variable, and I found that after calling loadModelNfg("../../ResourcesPacket/Models/Bila.nfg", vbo, ibo, num, myShaders); the variable got changed.
Since I wasn't actually using it in that function anymore, I just removed the parameter, and everything was fine.
What is strange is that I didn't modify the myShaders variable anywhere in that function (the code in the question is the final version). The problem, I think, is that I didn't declare the parameter as const Shaders shaders.
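What I mean is a signature along these lines (just a sketch; I haven't verified that const alone would have prevented the issue):
// Passing by const reference documents (and enforces) that loadModelNfg
// must not modify the caller's Shaders object.
void loadModelNfg(const std::string &filename,
                  GLuint &vbo, GLuint &ibo, GLuint &num,
                  const Shaders &shaders);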
So, the conclusion:
Use glGetError() and breakpoints in the code to find the real problem. It may not be where it breaks!
PS: I hope it's ok that I put this as an answer. If it's not, I'll update the question.
I'm attempting to use VAOs, VBOs, and IBOs with shaders, but no object gets drawn. I am not sure what I am missing. I am pretty new to C++ and GLSL, so I am not sure whether I am screwing something up with the C++ in general, or failing to handle the OpenGL context correctly.
The main function (code for handling window creation is missing. If you think you may need to review it as well, just let me know.):
int main(int argc, char *argv[])
{
//INIT SDL
SDL_Init(SDL_INIT_VIDEO);
SDL_CreateWindowAndRenderer(400, 300, SDL_WINDOW_OPENGL, &displayWindow, &displayRenderer);
SDL_GetRendererInfo(displayRenderer, &displayRendererInfo);
/*TODO: Check that we have OpenGL */
if ((displayRendererInfo.flags & SDL_RENDERER_ACCELERATED) == 0 || (displayRendererInfo.flags & SDL_RENDERER_TARGETTEXTURE) == 0) {}
SDL_GL_CreateContext(displayWindow);
//SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
glewInit();
int error = glGetError();
if (error != GL_NO_ERROR){ std::cout << "Error during glewInit call: " << error << "\n"; };
//glEnable(GL_DEBUG_OUTPUT);
Display_InitGL();
error = glGetError();
if (error != GL_NO_ERROR){ std::cout << "Error during Display init: " << error << "\n"; };
Display_SetViewport(400, 300);
error = glGetError();
if (error != GL_NO_ERROR){ std::cout << "Error during Display Set Viewport Issue: " << error << "\n"; };
// SET UP TEST OBJ
MainChar *player = new MainChar();
player->MainChar_VBO_Func();
GLushort size = player->MainChar_VBO_IndexBuffer_Func();
float count = 0.0;
// END SET UP OF TEST OBJ
GLint *length = new GLint;
const char* vertShdr = readFile("C:\\Users\\JRFerrell\\Documents\\Visual Studio 2013\\Projects\\GLEW Practice\\vertShader.vs", *length);
std::cout << vertShdr;
GLuint vertShaderId = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertShaderId, 1, &vertShdr, length);
std::cout << "\n\nLength: " << *length;
glCompileShader(vertShaderId);
GLint *length2 = new GLint;
const char* fragShdr = readFile("C:\\Users\\JRFerrell\\Documents\\Visual Studio 2013\\Projects\\GLEW Practice\\fragShader.fs", *length2);
GLint fragShaderId = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragShaderId, 1, &fragShdr, length2);
glCompileShader(fragShaderId);
GLuint shaderProgram = glCreateProgram();
glAttachShader(shaderProgram, vertShaderId);
error = glGetError();
if (error != GL_NO_ERROR){ std::cout << "Error during glAttachShader: " << error << "\n"; };
glAttachShader(shaderProgram, fragShaderId);
error = glGetError();
if (error != GL_NO_ERROR){ std::cout << "Error during glAttachShader: " << error << "\n"; };
glBindAttribLocation(shaderProgram, 0, "in_Position");
glBindAttribLocation(shaderProgram, 1, "in_Normal");
glLinkProgram(shaderProgram);
error = glGetError();
if (error != GL_NO_ERROR){ std::cout << "Error during glLinkProgram: " << error << "\n"; };
// END SHADER PROGRAM DEFINITION
//Check info log for errors:
int Len = 0;
char *Buffer = nullptr;
glGetShaderiv(vertShaderId, GL_INFO_LOG_LENGTH, &Len);
Buffer = new char[Len];
glGetShaderInfoLog(vertShaderId, Len, &Len, Buffer);
std::cout << "Vertex Log:" << std::endl << Buffer << std::endl;
delete[] Buffer;
glGetShaderiv(fragShaderId, GL_INFO_LOG_LENGTH, &Len);
Buffer = new char[Len];
glGetShaderInfoLog(fragShaderId, Len, &Len, Buffer);
std::cout << "Fragment Log:" << std::endl << Buffer << std::endl;
delete[] Buffer;
glGetProgramiv(shaderProgram, GL_INFO_LOG_LENGTH, &Len);
Buffer = new char[Len];
glGetProgramInfoLog(shaderProgram, Len, &Len, Buffer);
std::cout << "Shader Log:" << std::endl << Buffer << std::endl;
delete[] Buffer;
// Create VAO. Don't forget to enable all necessary states because the VAO starts with default state, cleaning all states prev called to do so.
GLuint VaoId;
glGenVertexArrays(1, &VaoId);
glBindVertexArray(VaoId);
// Bind buffers & set-up VAO vertex pointers
glBindBuffer(GL_ARRAY_BUFFER, player->vboID);
error = glGetError();
if (error != GL_NO_ERROR){ std::cout << "Error glBindBuffer-vboID: " << error << "\n"; }
glEnableClientState(GL_VERTEX_ARRAY);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * GL_FLOAT, (const GLvoid *)0);
glEnableVertexAttribArray(0);
// Set-up VAO normal pointers
error = glGetError();
if (error != GL_NO_ERROR){ std::cout << "Error glBindBuffer-vbo init: " << error << "\n"; }
glEnableClientState(GL_NORMAL_ARRAY);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * GL_FLOAT, (void*)(3 * sizeof(GL_FLOAT)));
glEnableVertexAttribArray(1);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, player->vboIndexID);
GLint maxLength, nAttribs;
glGetProgramiv(shaderProgram, GL_ACTIVE_ATTRIBUTES, &nAttribs);
glGetProgramiv(shaderProgram, GL_ACTIVE_ATTRIBUTES, &maxLength);
//std::cout << "\nmax length: " << maxLength << "\nnAttribs: " << nAttribs;
glBindVertexArray(0);
error = glGetError();
if (error != GL_NO_ERROR){ std::cout << "Error glBindVertexArray: " << error << "\n"; };
// End VAO init
while (1){
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
error = glGetError();
if (error != GL_NO_ERROR){ std::cout << "Error glClearColor: " << error << "\n"; };
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
error = glGetError();
if (error != GL_NO_ERROR){ std::cout << "Error in glClear: " << error << "\n"; };
glLoadIdentity();
glUseProgram(shaderProgram);
glBindVertexArray(VaoId);
glDrawElements(GL_TRIANGLES, size, GL_UNSIGNED_SHORT, 0);
glUseProgram(0);
glBindVertexArray(0);
SDL_GL_SwapWindow(displayWindow);
count -= .1;
}
SDL_Delay(5000);
SDL_Quit();
return 0;
}
The shader code:
Vertex shader:
#version 400
in vec3 in_Position;
in vec3 in_Normal;
void main()
{
gl_Position = vec4(in_Position, 1.0);
}
Fragment shader:
#version 400
out vec4 FragColor;
void main()
{
FragColor = vec4(0.0f, 0.5f, 1.0f, 1.0f);
}
I did look at similar questions on here already, and they helped me fix a few possible issues, but so far they haven't gotten my code up and running. I also asked some other people in real-time chat on gamedev.net, but they couldn't see where I went wrong either. I fixed a possible issue with declaring GLdoubles rather than floats, but that part was actually working without the VAO and shaders, so it is unlikely to have been the issue, in whole or in part.
I don't know if any of the following will solve your problem, but I do see some issues in your code:
glEnableClientState(GL_VERTEX_ARRAY);
Here you are mixing the old, deprecated built-in vertex attributes with generic vertex attributes. You don't need any of these glEnableClientState calls - your shader doesn't use the built-in attributes. The same goes for glLoadIdentity, which is also completely unneeded and would be invalid in a core profile context.
The second thing I see is that you do not specify your attribute indices, so the GL is free to map them. You also don't query them, but just assume them to be 0 for in_Position and 1 for in_Normal - which is by no means guaranteed to be the case. Use layout(location=...) qualifiers when declaring the input attributes in your vertex shader to define the mapping explicitly, or use glBindAttribLocation before linking.
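A minimal sketch of the options, using the attribute names from your shader (everything else is illustrative):
// Option 1: fix the locations in the vertex shader (GLSL 3.30 and later):
//   layout(location = 0) in vec3 in_Position;
//   layout(location = 1) in vec3 in_Normal;

// Option 2: bind the locations from C++ before linking, then keep using 0 and 1:
glBindAttribLocation(shaderProgram, 0, "in_Position");
glBindAttribLocation(shaderProgram, 1, "in_Normal");
glLinkProgram(shaderProgram);

// Option 3: query the locations the linker chose and use those instead of hard-coded 0/1:
GLint posLoc  = glGetAttribLocation(shaderProgram, "in_Position");
GLint normLoc = glGetAttribLocation(shaderProgram, "in_Normal");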
Quickly looking over your code, I am struggling to find where you are sending the buffer data to the GPU.
1. Generate and bind a new buffer.
2. Initialise the buffer to take the data.
3. Send the data using glBufferSubData...
4. Repeat steps 1 through 3 for the element arrays.
5. Generate and bind a Vertex Array Object.
6. Set up the VertexAttribArray pointers and bind them to your shader.
7. Bind the element buffer once again.
8. Unbind the vertex array using glBindVertexArray(0).
This is how I set up my buffers using OpenTK; the code should be fairly understandable and useful in any case:
// Generate Vertex Buffer Object and bind it so it is current.
GL.GenBuffers(1, out bufferHandle);
GL.BindBuffer(BufferTarget.ArrayBuffer, bufferHandle);
// Initialise storage space for the Vertex Buffer.
GL.BufferData(BufferTarget.ArrayBuffer, bufferSize, IntPtr.Zero, BufferUsageHint.StaticDraw);
// Send Position data.
GL.BufferSubData<Vector3>(
BufferTarget.ArrayBuffer, noOffset, new IntPtr(sizeOfPositionData), bufferObject.PositionData);
// Send Normals data, offset by size of Position data.
GL.BufferSubData<Vector3>(
BufferTarget.ArrayBuffer, new IntPtr(sizeOfPositionData), new IntPtr(sizeOfNormalsData), bufferObject.NormalsData);
// Generate Element Buffer Object and bind it so it is current.
GL.GenBuffers(1, out bufferHandle);
GL.BindBuffer(BufferTarget.ElementArrayBuffer, bufferHandle);
GL.BufferData(
BufferTarget.ElementArrayBuffer, new IntPtr(sizeof(uint) * bufferObject.IndicesData.Length), bufferObject.IndicesData, BufferUsageHint.StaticDraw);
GL.BindBuffer(BufferTarget.ArrayBuffer, bufferObject.VboID);
GL.BindBuffer(BufferTarget.ElementArrayBuffer, bufferObject.IboID);
// Generate Vertex Array Object and bind it so it is current.
GL.GenVertexArrays(1, out bufferHandle);
GL.BindVertexArray(bufferHandle);
bufferHandle = GL.GetAttribLocation(program, "in_position");
GL.EnableVertexAttribArray(bufferHandle);
GL.BindBuffer(BufferTarget.ArrayBuffer, bufferObject.VboID);
GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, true, Vector3.SizeInBytes, 0);
GL.BindAttribLocation(program, bufferHandle, "in_position");
bufferHandle = GL.GetAttribLocation(program, "in_normal");
GL.EnableVertexAttribArray(bufferHandle);
GL.BindBuffer(BufferTarget.ArrayBuffer, bufferObject.VboID);
GL.VertexAttribPointer(1, 3, VertexAttribPointerType.Float, true, Vector3.SizeInBytes, sizeOfPositionData);
GL.BindAttribLocation(program, bufferHandle, "in_normal");
GL.BindBuffer(BufferTarget.ElementArrayBuffer, bufferObject.IboID);
// IMPORTANT: vertex array needs unbinding here to avoid rendering incorrectly
GL.BindVertexArray(0);
Well, after sitting down and reading the docs for version 4.0, I learned that I had screwed up my attrib pointers by passing an incorrect stride and incorrect pointers to the start of the buffer data. My thought was that the stride was the size of the element type multiplied by the number of attribute elements, so you'd get to the next attribute you were looking for. Obviously that is not what you are supposed to do. I changed it to zero since my attribs are back to back:
"glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * GL_FLOAT, (void*)(3 * sizeof(GL_FLOAT)));"
-->
"glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)(3 * sizeof(GL_FLOAT)));"
Then there's the pointer, which I tried to handle in almost exactly the same way. It should have been a null pointer to the first attrib location in the buffer:
"glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)(3 * sizeof(GL_FLOAT)));"
-->
"glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (GLubyte *)NULL);"
Once I actually sat down and read the docs closely, I understood what actually belonged there. Now the shaders are working and I can work on the cool stuff... :P Thanks for the efforts to answer my question anyways everyone. :)
I am writing some proof-of-concept code. I want to prove that I can write data to a buffer object after the buffer has been created. However, I am getting a GLenum error code of 1280 when I try to unmap the buffer after writing to it. I am completely stymied.
I can initialize the buffer with color data and successfully render it. The problem is that I cannot modify the data in the buffer afterwards. The code is below. It shows how I write the new data to the buffer and then how I try to read it back. The error codes are shown in comments after the glGetError() calls. The variable "cbo" is the color buffer:
//NEW COLOR DATA
GLubyte colorData2[9] = {255,255,0, 0,128,255, 255,0,255};
//WRITE THE DATA TO THE COLOR BUFFER OBJECT (variable cbo)
glBindBuffer(GL_ARRAY_BUFFER, cbo);
int err1 = glGetError(); //Error code 0
//Oddly, glMapBuffer always returns an invalid pointer.
//GLvoid * pColor = glMapBuffer(GL_ARRAY_BUFFER, GL_MAP_WRITE_BIT);
//However, glMapBufferRange return a pointer that looks good
GLvoid * pColor = glMapBufferRange(GL_ARRAY_BUFFER, 0, 9, GL_MAP_WRITE_BIT);
int err2 = glGetError(); //Error code 0
// Copy colors from host to device
memcpy(pColor, colorData2, 9);
//Unmap to force host to device copy
glUnmapBuffer(cbo);
int err3 = glGetError(); //Error code 1280
//Unbind
glBindBuffer(GL_ARRAY_BUFFER, 0);
int err4 = glGetError(); //Error code 0
//******TEST THE WRITE******
GLubyte readbackData[9];
glBindBuffer(GL_ARRAY_BUFFER, cbo);
int err5 = glGetError(); //Error code 0
GLvoid * pColorX = glMapBufferRange(GL_ARRAY_BUFFER, 0, 9, GL_MAP_READ_BIT);
int err6 = glGetError(); //Error code 1282
//Mem copy halts because of a memory exception.
memcpy(readbackData, pColorX, 9);
glUnmapBuffer(cbo);
glBindBuffer(GL_ARRAY_BUFFER, 0);
Here is the code where I created the buffer object:
//Create color buffer
glGenBuffers(1, &cbo);
glBindBuffer(GL_ARRAY_BUFFER, cbo);
//Create space for three RGB 8-bit color objects
colorBufferSize = 3 * numColorChannels * sizeof(GLubyte);
glBufferData(GL_ARRAY_BUFFER, colorBufferSize, colorData, GL_DYNAMIC_DRAW);
//Unbind
glBindBuffer(GL_ARRAY_BUFFER, 0);
1280, or 0x0500, is GL_INVALID_ENUM.
glUnmapBuffer takes the target the buffer object is bound to, not the buffer object itself; it expects the buffer to be unmapped to be bound to that target. So glUnmapBuffer(GL_ARRAY_BUFFER) will unmap whatever buffer is currently bound to the GL_ARRAY_BUFFER binding point.
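Applied to your code, a minimal sketch of the fix (assuming cbo is still bound to GL_ARRAY_BUFFER at that point):
//You passed the buffer name; pass the binding target instead:
glUnmapBuffer(GL_ARRAY_BUFFER); //unmaps whatever is bound to GL_ARRAY_BUFFER (cbo here)
That would presumably also explain the 1282 (GL_INVALID_OPERATION) from the later glMapBufferRange: since the failed unmap left the buffer mapped, mapping it again is not allowed.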
I wrote a simple application using GLUT that I'm porting to SDL now to turn it into a game.
I am having a bizarre issue specifically with glDrawElements and Vertex Buffer Objects, on SDL 1.2.14 / OS X. If I don't use VBOs the program runs fine; it only throws an "EXC_BAD_ACCESS" when using VBOs. To make the matter more obscure, the program runs totally fine in GLUT. I must be missing something, perhaps in my initialization, that is causing this.
Here's the drawing code:
if (glewGetExtension("GL_ARB_vertex_buffer_object"))
{
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
//Load vertices
glBindBuffer(GL_ARRAY_BUFFER, this->mesh->vbo_vertices);
glVertexPointer(3, GL_FLOAT, 0, BUFFER_OFFSET(0));
//Load normals
glBindBuffer(GL_ARRAY_BUFFER, this->mesh->vbo_normals);
glNormalPointer(GL_FLOAT, 0, BUFFER_OFFSET(0));
//Load UVs
glBindBuffer(GL_ARRAY_BUFFER, this->mesh->vbo_uvs);
glTexCoordPointer(2, GL_FLOAT, 0, BUFFER_OFFSET(0));
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, this->mesh->vbo_index);
App dies here -----> glDrawElements(GL_TRIANGLES, 3*this->mesh->numFaces, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
} else {
//BTW: If I run this block of code instead of the above, everything renders fine. App doesn't die.
//Drawing with vertex arrays
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, this->mesh->vertexArray);
glNormalPointer(GL_FLOAT, 0, this->mesh->normalsArray);
glTexCoordPointer(2, GL_FLOAT, 0, this->mesh->uvArray);
glDrawElements(GL_TRIANGLES, 3*this->mesh->numFaces, GL_UNSIGNED_INT, this->mesh->indexArray);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
Here's the debug information:
Program received signal: “EXC_BAD_ACCESS”.
Thread-1-<com.apple.main-thread>
#0 0x17747a93 in gleRunVertexSubmitImmediate
#1 0x1774772c in gleLLVMArrayFunc
#2 0x177476e4 in gleSetVertexArrayFunc
#3 0x1773073c in gleDrawArraysOrElements_ExecCore
#4 0x176baa7b in glDrawElements_Exec
#5 0x97524050 in glDrawElements
asm gleRunVertexSubmitImmediate
0x17747a93 <+0771> mov (%eax,%ecx,4),%eax <-- the app dies on this.
Here's my SDL initialization code:
//Initialize SDL
if (SDL_Init(SDL_INIT_VIDEO) < 0)
{
cout << "Could not initialize SDL" << endl << SDL_GetError();
exit(2);
}
//Set window
SDL_WM_SetCaption("Hello World!", "Hello World!");
//Set openGL window
if ( SDL_SetVideoMode(width, height, 32, SDL_OPENGL | SDL_RESIZABLE) == NULL ) {
cout << "Unable to create OpenGL context: %s\n" << endl << SDL_GetError();
SDL_Quit();
exit(2);
}
//Set up event handling
SDL_Event event;
bool quit = false;
//Initialize GLEW
GLenum err = glewInit();
if (GLEW_OK != err)
{
//Problem: glewInit failed, something is seriously wrong.
fprintf(stderr, "Error: %s\n", glewGetErrorString(err));
exit(1);
}
fprintf(stdout, "Status: Using GLEW %s\n", glewGetString(GLEW_VERSION));
I don't see anything wrong with your code.
When you get EXC_BAD_ACCESS, it is typically because of an attempt to access an object that is not allocated, or that has been deallocated.
You can get more detailed debug information on the object in question by enabling the NSZombieEnabled environment variable. This is a simple blog post on how to enable this environment variable (I am not the author).
This will hopefully help get more information in the debug console on why the crash is happening.
If you don't have vn in your model (obj) file, you should comment out the following lines:
//glEnableClientState(GL_NORMAL_ARRAY);
//glDisableClientState(GL_NORMAL_ARRAY);
That one worked for me.
Please excuse the length (and the width; it's for clarity in an IDE), but I thought I'd show the code in full since the purpose is a simple Hello World with modern VBOs and GLSL.
It was initially based on http://people.freedesktop.org/~idr/OpenGL_tutorials/02-GLSL-hello-world.pdf
The main point is that not a single error message or warning is printed - and as you can see there are a lot of printfs (almost every call is checked for errors).
Compilation is done with -std=c99 -pedantic -O0 -g -Wall (with no warnings), so there isn't much room for compiler error either.
I have focused my attention on
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
and
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
(the latter is the only part of the code I don't fully understand yet; most obscure func 'ever')
The info log does not print anything, and it does print healthy output if the shaders are purposely made invalid. Hence it's neither the shader string assignment nor their compilation.
Can you see something that could make it render a blank screen?
It does draw a single dot in the middle if glDrawArrays is used with GL_POINTS, and the background does change color if glClear is preceded by an appropriate glClearColor.
#include "SDL.h" // Window and program management
#include "Glee.h" // OpenGL management; Notice SDL's OpenGL header is not included
#include <stdbool.h> // C99 bool
void initGL(void);
void drawGL(void);
int main (int argc, char **argv) {
// Load the SDL library; Initialize the Video Subsystem
if (SDL_Init(SDL_INIT_VIDEO) < 0 ) printf("SDL_Init fail: %s\n", SDL_GetError());
/* Video Subsystem: set up width, height, bits per pixel (0 = current display's);
Create an OpenGL rendering context */
if (SDL_SetVideoMode(800, 600, 0, SDL_OPENGL) == NULL) printf("SDL_SetVideoMode fail: %s\n", SDL_GetError());
// Title and icon text of window
SDL_WM_SetCaption("gl", NULL);
// Initialize OpenGL ..
initGL();
bool done = false;
// Loop indefinitely unless user quits
while (!done) {
// Draw OpenGL ..
drawGL();
// Deal with SDL events
SDL_Event sdl_event;
do {
if ( sdl_event.type == SDL_QUIT || (sdl_event.type == SDL_KEYDOWN && sdl_event.key.keysym.sym == SDLK_ESCAPE)) {
done = true;
break;
}
} while (SDL_PollEvent(&sdl_event));
}
// Clean SDL initialized systems, unload library and return.
SDL_Quit();
return 0;
}
GLuint program;
GLuint buffer;
#define BUFFER_OFFSET(i) ((char *)NULL + (i))
void initGL(void) {
// Generate 1 buffer object; point its name (in uint form) to *buffer.
glGenBuffers(1, &buffer); if(glGetError()) printf("glGenBuffers error\n");
/* bind the named (by a uint (via the previous call)) buffer object to target GL_ARRAY_BUFFER (target for vertices)
apparently, one object is bound to a target at a time. */
glBindBuffer(GL_ARRAY_BUFFER, buffer); if(glGetError()) printf("glBindBuffer error\n");
/* Create a data store for the current object bound to GL_ARRAY_BUFFER (from above), of a size 8*size of GLfloat,
with no initial data in it (NULL) and a hint to the GrLib that data is going to be modified once and used a
lot (STATIC), and it's going to be modified by the app and used by the GL for drawing or image specification (DRAW)
Store is not mapped yet. */
glBufferData( GL_ARRAY_BUFFER, 4 * 2 * sizeof(GLfloat), NULL, GL_STATIC_DRAW); if(glGetError()) printf("glBufferData error\n");
/* Actually map to the GL client's address space the data store currently bound to GL_ARRAY_BUFFER (from above).
Write only. */
GLfloat *data = (GLfloat *) glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY); if (data == NULL) printf("glMapBuffer error1\n"); if(glGetError()) printf("glMapBuffer error2\n");
// Apparently, write some data on the object.
data[0] = -0.75f; data[1] = -0.75f; data[2] = -0.75f; data[3] = 0.75f;
data[4] = 0.75f; data[5] = 0.75f; data[6] = 0.75f; data[7] = -0.75f;
// Unmap the data store. Required *before* the object is used.
if(!glUnmapBuffer(GL_ARRAY_BUFFER)) printf("glUnmapBuffer error\n");
// Specify the location and data format of an array of generic vertex attributes ..
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
// the shaders source
GLchar *vertex_shader_code[] = { "void main(void) { gl_Position = gl_Vertex; }"};
GLchar *fragment_shader_code[] = { "void main(void) { gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0); }"};
/* Create an empty shader object; used to maintain the source string; intended to run
on the programmable vertex processor; GL_SHADER_TYPE is set to GL_VERTEX_SHADER
(e.g. for use on glGetShaderiv)*/
GLuint vs = glCreateShader(GL_VERTEX_SHADER); if (!vs) printf("glCreateShader fail\n");
/* Set the source code in vs; 1 string; GLchar **vertex_shader_code array of pointers to strings,
length is NULL, i.e. strings assumed null terminated */
glShaderSource(vs, 1, (const GLchar **) &vertex_shader_code, NULL); if(glGetError()) printf("glShaderSource error\n");
// Actually compile the shader
glCompileShader(vs); GLint compile_status; glGetShaderiv(vs, GL_COMPILE_STATUS, &compile_status); if (compile_status == GL_FALSE) printf("vertex_shader_code compilation fail\n"); if(glGetError()) printf("glGetShaderiv fail\n");
// same
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER); if (!fs) printf("glCreateShader fail\n");
// same
glShaderSource(fs, 1, (const GLchar **) &fragment_shader_code, NULL); if(glGetError()) printf("glShaderSource error\n");
// same
glCompileShader(fs); glGetShaderiv(fs, GL_COMPILE_STATUS, &compile_status); if (compile_status == GL_FALSE) printf("fragment_shader_code compilation fail\n"); if(glGetError()) printf("glGetShaderiv fail\n");
/* Empty program for later attachment of shaders; it provides management mechanism for them.
Shaders can be compiled before or after their attachment. */
program = glCreateProgram(); if(!program) printf("glCreateProgram fail1\n"); if(glGetError()) printf("glCreateProgram fail2\n");
/* Attach shaders to program; this could be done before their compilation or their association with code
Destined to be linked together and form an executable. */
glAttachShader(program, vs); if(glGetError()) printf("glAttachShader fail1\n");
glAttachShader(program, fs); if(glGetError()) printf("glAttachShader fail2\n");
// Link the program; vertex shader objects create an executable for the vertex processor and similarly for fragment shaders.
glLinkProgram(program); GLint link_status; glGetProgramiv(program, GL_LINK_STATUS, &link_status); if (!link_status) printf("linking fail\n"); if(glGetError()) printf("glLinkProgram fail\n");
/* Get info log, if any (supported by the standard to be empty).
It does give nice output if compilation or linking fails. */
GLchar infolog[2048];
glGetProgramInfoLog(program, 2048, NULL, infolog); printf("%s", infolog); if (glGetError()) printf("glGetProgramInfoLog fail\n");
/* Install program to rendering state; one or more executables contained via compiled shaders inclusion.
Certain fixed functionalities are disabled for fragment and vertex processors when such executables
are installed, and executables may reimplement them. See glUseProgram manual page about it. */
glUseProgram(program); if(glGetError()) printf("glUseProgram fail\n");
}
void drawGL(void) {
// Clear color buffer to default value
glClear(GL_COLOR_BUFFER_BIT); if(glGetError()) printf("glClear error\n");
// Render the primitive
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); if(glGetError()) printf("glDrawArrays error\n");
SDL_GL_SwapBuffers();
}
Expanding on Calvin1602's answer:
ftransform assumes matrices, which you do not use. gl_Vertex ought to be fine here, considering the final result is supposed to lie in the [-1:1]^3 cube, and his data is in that interval. Now, it should be gl_VertexAttrib[0] if you really want to go all GL 3.1, but gl_Vertex and gl_VertexAttrib[0] alias (see below).
As for the enable: you use vertex attrib 0, so you need:
glEnableVertexAttribArray(0)
A piece of advice for figuring stuff out in general: don't clear to black; it makes it harder to tell whether something black is drawn or nothing is drawn at all (use glClearColor to change that).
On the pedantic side, your glShaderSource calls look suspicious, as you cast pointers around. I'd clean them up with
glShaderSource(fs, 1, fragment_shader_code, NULL);
The reason why it currently works with &fragment_shader_code is interesting, but here I don't see why you wouldn't simplify.
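For the curious, a quick sketch of why both forms end up pointing at the same memory (reusing the declaration from the question):
/* From the question: an array holding one pointer to the source string. */
GLchar *fragment_shader_code[] = { "void main(void) { gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0); }" };
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);

/* Passed directly, the array decays to a pointer to its first element - the clean form: */
glShaderSource(fs, 1, (const GLchar **) fragment_shader_code, NULL);

/* &fragment_shader_code has type GLchar *(*)[1], a different type, but the same address
   as the first element, so after the cast it happens to work as well. */
glShaderSource(fs, 1, (const GLchar **) &fragment_shader_code, NULL);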
== edit to add ==
Gah, not sure what I was thinking with gl_VertexAttrib. It's been a while since I last looked at this, and I just made up my own feature...
The standard way to provide non-built-in attributes is actually non-trivial until GL4.1.
// glsl
attribute vec4 myinput;
gl_Position = myinput;
// C-code, rely on linker for location
glLinkProgram(prog);
GLint location = glGetAttribLocation(prog, "myinput");
glEnableVertexAttribArray(location, ...)
// alternative C-code, specify location
glBindAttribLocation(prog, 0, "myinput");
glLinkProgram(prog);
glEnableVertexAttribArray(0, ...)
GL4.1 finally supports specifying the location directly in the shader.
// glsl 4.10
layout (location=0) in vec4 myinput;
In the vertex shader: gl_Position = ftransform(); instead of gl_Vertex. This will multiply the input vector by the modelview matrix (giving the point in camera space) and then by the projection matrix (giving the point in normalized device coordinates, i.e. its position on the screen).
glEnable(GL_VERTEX_ARRAY); before the rendering. cf. the glDrawArrays reference: "If GL_VERTEX_ARRAY is not enabled, no geometric primitives are generated."
... I don't see anything else