openGL: glGenVertexArrays Access violation executing location 0x00000000 - c++

I'm trying to teach myself OpenGL, so I bought a book about it, and the first chapter contains example code. I tried it, and something went wrong. At lines 17 (glGenVertexArrays(NumVAOs, VAOs);) and 18 (glBindVertexArray(VAOs[Triangles]);), VS 2013 Ultimate reports an error, exactly: "Unhandled exception at 0x77350309 in openGL_3.exe: 0xC0000005: Access violation executing location 0x00000000.". So I think something is wrong with memory, but I do not know what. Can someone help me?
#include <iostream>
using namespace std;
#include <vgl.h>
#include <LoadShaders.h>
enum VAO_IDs { Triangles, NumVAOs };
enum Buffer_IDs { ArrayBuffer, NumBuffers };
enum Attrib_IDs { vPosition = 0 };
GLuint VAOs[NumVAOs];
GLuint Buffers[NumBuffers];
const GLuint NumVertices = 6;
void init(void)
{
    glGenVertexArrays(NumVAOs, VAOs);
    glBindVertexArray(VAOs[Triangles]);
    GLfloat vertices[NumVertices][2] = {
        { -0.90, -0.90 }, // Triangle 1
        {  0.85, -0.90 },
        { -0.90,  0.85 },
        {  0.90, -0.85 }, // Triangle 2
        {  0.90,  0.90 },
        { -0.85,  0.90 }
    };
    glGenBuffers(NumBuffers, Buffers);
    glBindBuffer(GL_ARRAY_BUFFER, Buffers[ArrayBuffer]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices),
                 vertices, GL_STATIC_DRAW);
    ShaderInfo shaders[] = {
        { GL_VERTEX_SHADER, "triangles.vert" },
        { GL_FRAGMENT_SHADER, "triangles.frag" },
        { GL_NONE, NULL }
    };
    GLuint program = LoadShaders(shaders);
    glUseProgram(program);
    glVertexAttribPointer(vPosition, 2, GL_FLOAT,
                          GL_FALSE, 0, BUFFER_OFFSET(0));
    glEnableVertexAttribArray(vPosition);
}
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBindVertexArray(VAOs[Triangles]);
    glDrawArrays(GL_TRIANGLES, 0, NumVertices);
    glFlush();
}
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA);
    glutInitWindowSize(512, 512);
    glutInitContextVersion(4, 3);
    glutInitContextProfile(GLUT_CORE_PROFILE);
    glutCreateWindow(argv[0]);
    if (glewInit()) {
        cerr << "Unable to initialize GLEW ... exiting" << endl;
        exit(EXIT_FAILURE);
    }
    init();
    glutDisplayFunc(display);
    glutMainLoop();
}

You can try setting:
glewExperimental = GL_TRUE;
before your call to glewInit(). Sources:
https://stackoverflow.com/a/20822876/833188
https://stackoverflow.com/a/22820845/833188
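In context, that looks like the following (a minimal sketch reusing the OP's existing error handling):
glewExperimental = GL_TRUE; // must be set before glewInit() when targeting a core profile
if (glewInit() != GLEW_OK) {
    cerr << "Unable to initialize GLEW ... exiting" << endl;
    exit(EXIT_FAILURE);
}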

Are you sure that you can request version 4.3 before the call to glewInit()? Every version >= 3.0 requires WGL_ARB_create_context (Windows) / GLX_ARB_create_context (Linux), which is an extension.
Usually, to create a "modern" OpenGL context (3.0+), it is required to (see the sketch after this list):
Create a temporary context and set it as current.
Initialize extensions (either manually or with a loader like GLEW or GLFW).
Request the desired version (and profile, if you create version 3.2 or higher and WGL_ARB_create_context_profile/GLX_ARB_create_context_profile is present).
Delete the temporary context from step 1 and bind your new context.
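A rough sketch of those four steps on Windows, assuming an HDC hdc whose pixel format is already set and using the standard WGL_ARB_create_context attribute names (illustrative only, not drop-in code):
HGLRC temp = wglCreateContext(hdc);   // 1. temporary legacy context
wglMakeCurrent(hdc, temp);
glewExperimental = GL_TRUE;           // 2. initialize extension pointers
glewInit();                           //    (loads wglCreateContextAttribsARB)
const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 4, // 3. request the desired version and profile
    WGL_CONTEXT_MINOR_VERSION_ARB, 3,
    WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
    0
};
HGLRC ctx = wglCreateContextAttribsARB(hdc, NULL, attribs);
wglMakeCurrent(NULL, NULL);           // 4. delete the temporary context
wglDeleteContext(temp);
wglMakeCurrent(hdc, ctx);             //    and bind the new one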
You may want to look at:
OpenGL 3.0 Context Creation (GLX)
OpenGL 3.1 The First Triangle (WGL)
I do not know what the connection is between requesting a context version in GLUT and GLEW initialization (I often use GLEW, but creating the context is something I always do manually, with the platform-specific API), but obviously the pointers to the new OpenGL API are not initialized when you call glGenVertexArrays. That is the reason for your error: you try to call a function through a pointer which is NULL.
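If you would rather fail gracefully than crash on that NULL pointer, GLEW exposes availability flags you can test after glewInit(); a minimal guard might look like this:
if (!GLEW_VERSION_3_0 && !GLEW_ARB_vertex_array_object) {
    cerr << "VAO entry points are not available in this context ... exiting" << endl;
    exit(EXIT_FAILURE);
}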

I generally agree with Sga's answer recommending to set glewExperimental=GL_TRUE before calling glewInit(). GLEW will fail to initialize in a core profile if this option is not set.
However, the fact that glewInit() does not fail implies that the OP did not get a core profile at all (or that GLEW has finally been fixed, but that is more of a theoretical possibility).
I already had a look into freeglut's implementation of the (freeglut-specific) glutInitContextVersion API for the question "Where is the documentation for glutInitContextVersion?", and the conclusions from back then might be helpful in this case. To quote myself:
From looking into the code (for the current stable version 2.8.1), one will see that freeglut implements the following logic: if the implementation cannot fulfill the version constraints but does support the ARB_create_context extension, it will generate some error and no context will be created. However, if a version is requested but the implementation does not even support the relevant extensions, a legacy GL context is created, effectively ignoring the version request completely.
So from the reported behavior, I deduce that the OP only got a legacy context, possibly even Microsoft's default OpenGL 1.1 implementation.
This also explains why glGenVertexArrays() is a NULL pointer even after glewInit() succeeded: the extension is not supported for this context.
You should check what glGetString(GL_VERSION) and glGetString(GL_RENDERER) actually return right after the glutCreateWindow() call. And depending on the output, you might consider checking/updating your graphics drivers, or you might learn that your GPU is just not capable of modern GL at all.
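For example, a minimal probe placed right after glutCreateWindow() (the NULL checks guard against there being no usable context at all):
const GLubyte* version = glGetString(GL_VERSION);
const GLubyte* renderer = glGetString(GL_RENDERER);
cout << "GL_VERSION:  " << (version ? (const char*)version : "(null)") << endl;
cout << "GL_RENDERER: " << (renderer ? (const char*)renderer : "(null)") << endl;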

Related

OpenGL 3 Rendering Problems Linux (Ubuntu Mate)

I'm unable to get OpenGL (with GLFW) to render content to the screen. I'm not even able to set a clear color and have it displayed when I run my application; I'm just consistently presented with a black screen.
I have installed the requisite dependencies on my system and set up the build environment such that I'm able to successfully compile my applications (and dependencies) without error. Here is a snippet of the problematic code... You will note that much of the rendering code has actually been commented out. For now, it will be sufficient to just have the clear color I chose displayed, to verify that everything is set up correctly:
// Include standard headers
#include <stdio.h>
#include <stdlib.h>
//Include GLEW. Always include it before gl.h and glfw3.h, since it's a bit magic.
#include <GL/glew.h>
// Include GLFW
#include <GLFW/glfw3.h>
// Include GLM
#include <glm/glm.hpp>
#include <GL/glu.h>
#include <common/shader.h>
#include <iostream>
using namespace glm;
int main()
{
    // Initialise GLFW
    glewExperimental = true; // Needed for core profile
    if( !glfwInit() )
    {
        fprintf( stderr, "Failed to initialize GLFW\n" );
        return -1;
    }
    // Open a window and create its OpenGL context
    GLFWwindow* window; // (In the accompanying source code, this variable is global for simplicity)
    window = glfwCreateWindow( 1024, 768, "Tutorial 02", NULL, NULL);
    if( window == NULL ){
        fprintf( stderr, "Failed to open GLFW window. If you have an Intel GPU, they are not 3.3 compatible. Try the 2.1 version of the tutorials.\n" );
        glfwTerminate();
        return -1;
    }
    glfwMakeContextCurrent(window); // Initialize GLEW
    //glewExperimental=true; // Needed in core profile
    if (glewInit() != GLEW_OK) {
        fprintf(stderr, "Failed to initialize GLEW\n");
        return -1;
    }
    //INIT VERTEX ARRAY OBJECT (VAO)...
    //create Vertex Array Object (VAO)
    GLuint VertexArrayID;
    //Generate 1 buffer, put the resulting identifier in our Vertex array identifier.
    glGenVertexArrays(1, &VertexArrayID);
    //Bind the Vertex Array Object (VAO) associated with the specified identifier.
    glBindVertexArray(VertexArrayID);
    // Create an array of 3 vectors which represents 3 vertices
    static const GLfloat g_vertex_buffer_data[] = {
        -1.0f, -1.0f, 0.0f,
         1.0f, -1.0f, 0.0f,
         0.0f,  1.0f, 0.0f,
    };
    //INIT VERTEX BUFFER OBJECT (VBO)...
    // This will identify our vertex buffer
    GLuint VertexBufferId;
    // Generate 1 buffer, put the resulting identifier in VertexBufferId
    glGenBuffers(1, &VertexBufferId);
    //Bind the Vertex Buffer Object (VBO) associated with the specified identifier.
    glBindBuffer(GL_ARRAY_BUFFER, VertexBufferId);
    // Give our vertices to OpenGL.
    glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
    //Compile our Vertex and Fragment shaders into a shader program.
    /**
    GLuint programId = LoadShaders("../tutorial2-drawing-triangles/SimpleVertexShader.glsl","../tutorial2-drawing-triangles/SimpleFragmentShader.glsl");
    if(programId == -1){
        printf("An error occured whilst attempting to load one or more shaders. Exiting....");
        exit(-1);
    }
    //glUseProgram(programId); //use our shader program
    */
    // Ensure we can capture the escape key being pressed below
    glfwSetInputMode(window, GLFW_STICKY_KEYS, GL_TRUE);
    do{
        // Clear the screen. It's not mentioned before Tutorial 02, but it can cause flickering, so it's there nonetheless.
        glClearColor(8.0f, 0.0f, 0.0f, 0.3f);
        //glClearColor(1.0f, 1.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        // DRAW OUR TRIANGLE...
        /**
        glBindBuffer(GL_ARRAY_BUFFER, VertexBufferId);
        glEnableVertexAttribArray(0); // 1st attribute buffer : vertices
        glVertexAttribPointer(
            0,        // attribute 0. No particular reason for 0, but must match the layout in the shader.
            3,        // size
            GL_FLOAT, // type
            GL_FALSE, // normalized?
            0,        // stride
            (void*)0  // array buffer offset
        );
        // plot the triangle !
        glDrawArrays(GL_TRIANGLES, 0, 3); // Starting from vertex 0; 3 vertices total -> 1 triangle
        glDisableVertexAttribArray(0); //clean up attribute array
        */
        // Swap buffers
        glfwSwapBuffers(window);
        //poll for and process events.
        glfwPollEvents();
    } // Check if the ESC key was pressed or the window was closed
    while( glfwGetKey(window, GLFW_KEY_ESCAPE ) != GLFW_PRESS &&
           glfwWindowShouldClose(window) == 0 );
}
Again, pretty straightforward as far as OpenGL goes; all rendering logic, loading of shaders, etc. has been commented out. I'm just trying to set a clear color and have it displayed, to be sure my environment is configured correctly. To build the application I'm using Qt Creator with a custom CMake file. I can post the make file if you think it may help determine the problem.
So I managed to solve the problem. I'll attempt to succinctly outline the source of the problem and how I arrived at a resolution, in the hope that it may be useful to others who encounter the same issue:
In a nutshell, the source of the problem was a driver issue. I neglected to mention that I was actually running OpenGL inside an Ubuntu MATE 18.0 VM (via Parallels 16) on a MacBook Pro (with dedicated graphics). Therein lies the problem: up until very recently, both Parallels and Ubuntu simply did not support OpenGL 3.3 and upwards. I discovered this by adding the following lines to the posted code in order to force a specific OpenGL version:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
On doing this, the application immediately began to crash, and glGetError() reported that I needed to downgrade to an earlier version of OpenGL, as 3.3 was not compatible with my system.
The solution was two-fold:
Update Parallels to version 17, which now includes a dedicated third-party virtual GPU (virGL) that is capable of running OpenGL 3.3 code.
Update Ubuntu, or at the very least the kernel, as virGL only works with Linux kernel versions 5.10 and above. (Ubuntu MATE 18 only ships with kernel version 5.04.)
That's it; making the changes as described enabled me to run the code exactly as posted and successfully render a basic triangle to the screen.
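If you are in a similar VM setup, a quick way to see which OpenGL version the guest actually exposes (assuming the mesa-utils package, which provides glxinfo, is installed) is:
glxinfo | grep "OpenGL version"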

OpenGL Red Book with Mac OS X

I would like to work through the OpenGL Red Book, The OpenGL Programming Guide, 8th edition, using Xcode on Mac OS X.
I am unable to run the first code example, triangles.cpp. I have tried including the GLUT and GL frameworks that come with Xcode and I have searched around enough to see that I am not likely to figure this out on my own.
Assuming that I have a fresh installation of Mac OS X, and I have freshly installed Xcode with Xcode command-line tools, what are the step-by-step instructions to be able to run triangles.cpp in that environment?
Unlike this question, my preference would be not to use Cocoa, Objective-C or Swift. My preference would be to stay in C++/C only. An answer is only correct if I can follow it step-by-step and end up with a running triangles.cpp program.
My preference is Mac OS X 10.9, however a correct answer can assume 10.9, 10.10 or 10.11.
Thank you.
///////////////////////////////////////////////////////////////////////
//
// triangles.cpp
//
///////////////////////////////////////////////////////////////////////
#include <iostream>
using namespace std;
#include "vgl.h"
#include "LoadShader.h"
enum VAO_IDs { Triangles, NumVAOs };
enum Buffer_IDs { ArrayBuffer, NumBuffers };
enum Attrib_IDs { vPosition = 0 };
GLuint VAOs[NumVAOs];
GLuint Buffers[NumBuffers];
const GLuint NumVertices = 6;
//---------------------------------------------------------------------
//
// init
//
void
init(void)
{
    glGenVertexArrays(NumVAOs, VAOs);
    glBindVertexArray(VAOs[Triangles]);
    GLfloat vertices[NumVertices][2] = {
        { -0.90, -0.90 }, // Triangle 1
        {  0.85, -0.90 },
        { -0.90,  0.85 },
        {  0.90, -0.85 }, // Triangle 2
        {  0.90,  0.90 },
        { -0.85,  0.90 }
    };
    glGenBuffers(NumBuffers, Buffers);
    glBindBuffer(GL_ARRAY_BUFFER, Buffers[ArrayBuffer]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices),
                 vertices, GL_STATIC_DRAW);
    ShaderInfo shaders[] = {
        { GL_VERTEX_SHADER, "triangles.vert" },
        { GL_FRAGMENT_SHADER, "triangles.frag" },
        { GL_NONE, NULL }
    };
    GLuint program = LoadShaders(*shaders);
    glUseProgram(program);
    glVertexAttribPointer(vPosition, 2, GL_FLOAT,
                          GL_FALSE, 0, BUFFER_OFFSET(0));
    glEnableVertexAttribArray(vPosition);
}
//---------------------------------------------------------------------
//
// display
//
void
display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBindVertexArray(VAOs[Triangles]);
    glDrawArrays(GL_TRIANGLES, 0, NumVertices);
    glFlush();
}
//---------------------------------------------------------------------
//
// main
//
int
main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA);
    glutInitWindowSize(512, 512);
    glutInitContextVersion(4, 3);
    glutInitContextProfile(GLUT_CORE_PROFILE);
    glutCreateWindow(argv[0]);
    glewExperimental = GL_TRUE;
    if (glewInit()) {
        cerr << "Unable to initialize GLEW ... exiting" << endl;
        exit(EXIT_FAILURE);
    }
    init();
    glutDisplayFunc(display);
    glutMainLoop();
}
Edit 1: In response to the first comment, here is the naive effort.
Open Xcode 5.1.1 on Mac OS X 10.9.5
Create a new C++ Command-line project.
Paste over the contents of main.cpp with the contents of triangles.cpp.
Click on the project -> Build Phases -> Link Binary with Libraries
Add OpenGL.framework and GLUT.framework
Result: "/Users/xxx/Desktop/Triangles/Triangles/main.cpp:10:10: 'vgl.h' file not found"
Edit 2: Added the vgl translation unit and LoadShaders translation unit; also added libFreeGlut.a and libGlew32.a to my project's compilation/linking. Moved all of the OpenGL book's Include contents to my project's source directory. Had to change several include statements to use quoted includes instead of angled includes. It feels like this is closer to working, but it is unable to find LoadShader.h. Note that the translation unit in the OpenGL download is called LoadShaders (plural). Changing triangles.cpp to reference LoadShaders.h fixed the include problem, but the contents of that translation unit don't seem to match the signatures of what's being called from triangles.cpp.
There are some issues with the source and with the files in oglpg-8th-edition.zip:
triangles.cpp uses non-standard GLUT functions that aren't included in GLUT and are instead only part of the freeglut implementation (glutInitContextVersion and glutInitContextProfile). freeglut doesn't really support OS X, and building it there relies on additional X11 support. Instead of telling you how to do that, I'm just going to modify the source to build with OS X's GLUT framework.
The code depends on GLEW, and the book's source download apparently doesn't include a binary you can use, so you'll need to build it yourself.
Build GLEW with the following commands:
git clone git://git.code.sf.net/p/glew/code glew
cd glew
make extensions
make
Now:
Create a C++ command line Xcode project
Set the executable to link with the OpenGL and GLUT frameworks and the glew dylib you just built.
Modify the project "Header Search Paths" to include the location of the glew headers for the library you built, followed by the path to oglpg-8th-edition/include
Add oglpg-8th-edition/lib/LoadShaders.cpp to your xcode project
Paste the triangles.cpp source into the main.cpp of your Xcode project
Modify the source: replace #include "vgl.h" with:
#include <GL/glew.h>
#include <OpenGL/gl3.h>
#include <GLUT/glut.h>
#define BUFFER_OFFSET(x) ((const void*) (x))
Also make sure that the typos in the version of triangles.cpp that you include in your question are fixed: you include "LoadShader.h" when it should be "LoadShaders.h", and LoadShaders(*shaders); should be LoadShaders(shaders). (The code printed in my copy of the book doesn't contain these errors.)
Delete the calls to glutInitContextVersion and glutInitContextProfile.
Change the parameter to glutInitDisplayMode to GLUT_RGBA | GLUT_3_2_CORE_PROFILE
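Applied to the main() from the question, those last two modifications would leave the setup looking roughly like this (a sketch of the edit, not code from the book):
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA | GLUT_3_2_CORE_PROFILE); // OS X's route to a core profile
glutInitWindowSize(512, 512);
// glutInitContextVersion and glutInitContextProfile removed: they are freeglut-only
glutCreateWindow(argv[0]);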
At this point the code builds, links, and runs, however running the program displays a black window for me instead of the expected triangles.
About fixing the black window issue mentioned in Matthew's and bames53's comments:
Follow bames53's answer.
Define the shaders as strings:
const char *pTriangleVert =
"#version 410 core\n\
layout(location = 0) in vec4 vPosition;\n\
void\n\
main()\n\
{\n\
gl_Position= vPosition;\n\
}";
const char *pTriangleFrag =
"#version 410 core\n\
out vec4 fColor;\n\
void\n\
main()\n\
{\n\
fColor = vec4(0.0, 0.0, 1.0, 1.0);\n\
}";
OpenGL 4.1 is supported on my iMac, so I changed the version to 410.
ShaderInfo shaders[] = {
    { GL_VERTEX_SHADER, pTriangleVert },
    { GL_FRAGMENT_SHADER, pTriangleFrag },
    { GL_NONE, NULL }
};
Modify the ShaderInfo struct slightly
change
typedef struct {
    GLenum type;
    const char* filename;
    GLuint shader;
} ShaderInfo;
into
typedef struct {
    GLenum type;
    const char* source;
    GLuint shader;
} ShaderInfo;
Modify the LoadShaders function slightly:
comment out the code that reads the shader from a file, changing
/*
const GLchar* source = ReadShader( entry->filename );
if ( source == NULL ) {
    for ( entry = shaders; entry->type != GL_NONE; ++entry ) {
        glDeleteShader( entry->shader );
        entry->shader = 0;
    }
    return 0;
}
glShaderSource( shader, 1, &source, NULL );
delete [] source;*/
into
glShaderSource(shader, 1, &entry->source, NULL);
You'd better turn DEBUG on, in case there are shader compile errors.
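A minimal version of such a check, assuming shader holds the id that was just passed to glCompileShader (this mirrors what the book's DEBUG path prints):
GLint compiled;
glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
if (!compiled) {
    GLchar log[512];
    glGetShaderInfoLog(shader, sizeof(log), NULL, log);
    std::cerr << "shader compile error: " << log << std::endl;
}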
You can use the example from this link; it's almost the same, but uses GLFW instead of GLUT:
http://www.tomdalling.com/blog/modern-opengl/01-getting-started-in-xcode-and-visual-cpp/
/*
 main
 Copyright 2012 Thomas Dalling - http://tomdalling.com/
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at
 http://www.apache.org/licenses/LICENSE-2.0
 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
*/
//#include "platform.hpp"
// third-party libraries
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>
// standard C++ libraries
#include <cassert>
#include <iostream>
#include <stdexcept>
#include <cmath>
// tdogl classes
#include "Program.h"
// constants
const glm::vec2 SCREEN_SIZE(800, 600);
// globals
GLFWwindow* gWindow = NULL;
tdogl::Program* gProgram = NULL;
GLuint gVAO = 0;
GLuint gVBO = 0;
// loads the vertex shader and fragment shader, and links them to make the global gProgram
static void LoadShaders() {
    std::vector<tdogl::Shader> shaders;
    shaders.push_back(tdogl::Shader::shaderFromFile("vertex-shader.txt", GL_VERTEX_SHADER));
    shaders.push_back(tdogl::Shader::shaderFromFile("fragment-shader.txt", GL_FRAGMENT_SHADER));
    gProgram = new tdogl::Program(shaders);
}
// loads a triangle into the VAO global
static void LoadTriangle() {
    // make and bind the VAO
    glGenVertexArrays(1, &gVAO);
    glBindVertexArray(gVAO);
    // make and bind the VBO
    glGenBuffers(1, &gVBO);
    glBindBuffer(GL_ARRAY_BUFFER, gVBO);
    // Put the three triangle vertices into the VBO
    GLfloat vertexData[] = {
        //  X     Y     Z
         0.0f, 0.8f, 0.0f,
        -0.8f,-0.8f, 0.0f,
         0.8f,-0.8f, 0.0f,
    };
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), vertexData, GL_STATIC_DRAW);
    // connect the xyz to the "vert" attribute of the vertex shader
    glEnableVertexAttribArray(gProgram->attrib("vert"));
    glVertexAttribPointer(gProgram->attrib("vert"), 3, GL_FLOAT, GL_FALSE, 0, NULL);
    // unbind the VBO and VAO
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindVertexArray(0);
}
// draws a single frame
static void Render() {
    // clear everything
    glClearColor(0, 0, 0, 1); // black
    glClear(GL_COLOR_BUFFER_BIT);
    // bind the program (the shaders)
    glUseProgram(gProgram->object());
    // bind the VAO (the triangle)
    glBindVertexArray(gVAO);
    // draw the VAO
    glDrawArrays(GL_TRIANGLES, 0, 3);
    // unbind the VAO
    glBindVertexArray(0);
    // unbind the program
    glUseProgram(0);
    // swap the display buffers (displays what was just drawn)
    glfwSwapBuffers(gWindow);
}
void OnError(int errorCode, const char* msg) {
    throw std::runtime_error(msg);
}
// the program starts here
void AppMain() {
    // initialise GLFW
    glfwSetErrorCallback(OnError);
    if(!glfwInit())
        throw std::runtime_error("glfwInit failed");
    // open a window with GLFW
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
    gWindow = glfwCreateWindow((int)SCREEN_SIZE.x, (int)SCREEN_SIZE.y, "OpenGL Tutorial", NULL, NULL);
    if(!gWindow)
        throw std::runtime_error("glfwCreateWindow failed. Can your hardware handle OpenGL 3.2?");
    // GLFW settings
    glfwMakeContextCurrent(gWindow);
    // initialise GLEW
    glewExperimental = GL_TRUE; //stops glew crashing on OSX :-/
    if(glewInit() != GLEW_OK)
        throw std::runtime_error("glewInit failed");
    // print out some info about the graphics drivers
    std::cout << "OpenGL version: " << glGetString(GL_VERSION) << std::endl;
    std::cout << "GLSL version: " << glGetString(GL_SHADING_LANGUAGE_VERSION) << std::endl;
    std::cout << "Vendor: " << glGetString(GL_VENDOR) << std::endl;
    std::cout << "Renderer: " << glGetString(GL_RENDERER) << std::endl;
    // make sure OpenGL version 3.2 API is available
    if(!GLEW_VERSION_3_2)
        throw std::runtime_error("OpenGL 3.2 API is not available.");
    // load vertex and fragment shaders into opengl
    LoadShaders();
    // create buffer and fill it with the points of the triangle
    LoadTriangle();
    // run while the window is open
    while(!glfwWindowShouldClose(gWindow)){
        // process pending events
        glfwPollEvents();
        // draw one frame
        Render();
    }
    // clean up and exit
    glfwTerminate();
}
int main(int argc, char *argv[]) {
    try {
        AppMain();
    } catch (const std::exception& e){
        std::cerr << "ERROR: " << e.what() << std::endl;
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
I have adapted the project for Mac here:
https://github.com/badousuan/openGLredBook9th
The project builds successfully, and most demos run as expected. However, the original code is based on OpenGL 4.5, while Mac only supports version 4.1, so some newer API calls may fail. If some target does not work well, you should consider this version issue and make some adaptations.
I use the code from this tutorial: http://antongerdelan.net/opengl/hellotriangle.html, and it works on my Mac.
Here is the code I run.
#include <GL/glew.h>    // include GLEW and new version of GL on Windows
#include <GLFW/glfw3.h> // GLFW helper library
#include <stdio.h>
int main() {
    // start GL context and O/S window using the GLFW helper library
    if (!glfwInit()) {
        fprintf(stderr, "ERROR: could not start GLFW3\n");
        return 1;
    }
    // uncomment these lines if on Apple OS X
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    GLFWwindow* window = glfwCreateWindow(640, 480, "Hello Triangle", NULL, NULL);
    if (!window) {
        fprintf(stderr, "ERROR: could not open window with GLFW3\n");
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(window);
    // start GLEW extension handler
    glewExperimental = GL_TRUE;
    glewInit();
    // get version info
    const GLubyte* renderer = glGetString(GL_RENDERER); // get renderer string
    const GLubyte* version = glGetString(GL_VERSION);   // version as a string
    printf("Renderer: %s\n", renderer);
    printf("OpenGL version supported %s\n", version);
    // tell GL to only draw onto a pixel if the shape is closer to the viewer
    glEnable(GL_DEPTH_TEST); // enable depth-testing
    glDepthFunc(GL_LESS);    // depth-testing interprets a smaller value as "closer"
    /* OTHER STUFF GOES HERE NEXT */
    float points[] = {
         0.0f,  0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
        -0.5f, -0.5f, 0.0f
    };
    GLuint vbo = 0; // vertex buffer object
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(float), points, GL_STATIC_DRAW);
    GLuint vao = 0; // vertex array object
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
    const char* vertex_shader =
        "#version 400\n"
        "in vec3 vp;"
        "void main() {"
        "  gl_Position = vec4(vp, 1.0);"
        "}";
    const char* fragment_shader =
        "#version 400\n"
        "out vec4 frag_colour;"
        "void main() {"
        "  frag_colour = vec4(0.5, 0.0, 0.5, 1.0);"
        "}";
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vertex_shader, NULL);
    glCompileShader(vs);
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fragment_shader, NULL);
    glCompileShader(fs);
    GLuint shader_programme = glCreateProgram();
    glAttachShader(shader_programme, fs);
    glAttachShader(shader_programme, vs);
    glLinkProgram(shader_programme);
    while(!glfwWindowShouldClose(window)) {
        // wipe the drawing surface clear
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glUseProgram(shader_programme);
        glBindVertexArray(vao);
        // draw points 0-3 from the currently bound VAO with current in-use shader
        glDrawArrays(GL_TRIANGLES, 0, 3);
        // update other events like input handling
        glfwPollEvents();
        // put the stuff we've been drawing onto the display
        glfwSwapBuffers(window);
    }
    // close GL context and any other GLFW resources
    glfwTerminate();
    return 0;
}

OpenGL Segmentation Fault

I'm following the OpenGL Programming Guide:8th edition, and I'm kind of stuck in the 1st chapter (triangles.cpp).
I'm running Ubuntu 14.04.2, and this is the code used:
// Draws 2 triangles on the screen
#include <iostream>
using namespace std;
#include "vgl.h"
#include "LoadShaders.h"
enum VAO_IDs { Triangles, NumVAOs };
enum Buffer_IDs { ArrayBuffer, NumBuffers };
enum Attrib_IDs { vPosition = 0 };
GLuint VAOs[NumVAOs];
GLuint Buffers[NumBuffers];
const GLuint NumVertices = 6;
// init
void init(void)
{
    glGenVertexArrays(NumVAOs, VAOs);
    glBindVertexArray(VAOs[Triangles]);
    GLfloat vertices[NumVertices][2] = {
        { -0.90, -0.90 }, //Triangle 1
        {  0.85, -0.90 },
        { -0.90,  0.85 },
        {  0.90, -0.85 }, //Triangle 2
        {  0.90,  0.90 },
        { -0.85,  0.90 }
    };
    glGenBuffers(NumBuffers, Buffers);
    glBindBuffer(GL_ARRAY_BUFFER, Buffers[ArrayBuffer]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices),
                 vertices, GL_STATIC_DRAW);
    ShaderInfo shaders[] = {
        { GL_VERTEX_SHADER, "trianges.vert" },
        { GL_FRAGMENT_SHADER, "triangles.frag" },
        { GL_NONE, NULL }
    };
    GLuint program = LoadShaders(shaders);
    glUseProgram(program);
    glVertexAttribPointer(vPosition, 2, GL_FLOAT,
                          GL_FALSE, 0, BUFFER_OFFSET(0));
    glEnableVertexAttribArray(vPosition);
}
// display
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBindVertexArray(VAOs[Triangles]);
    glDrawArrays(GL_TRIANGLES, 0, NumVertices);
    glFlush();
}
// main
int main (int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA);
    glutInitWindowSize(512, 512);
    glutInitContextVersion(4, 3);
    glutInitContextProfile(GLUT_CORE_PROFILE);
    glutCreateWindow(argv[0]);
    if (glewInit()) {
        cerr << "Unable to initialize GLEW ... exiting" << '\n';
        exit(EXIT_FAILURE);
    }
    init();
    glutDisplayFunc(display);
    glutMainLoop();
}
This code compiles successfully, but yields a segmentation fault on execution.
I used gdb to debug the error. A random window does pop up (only when I use gdb), but it simply displays what's behind the window (so it has crashed and doesn't display the two triangles).
This is the output from gdb:
Traceback (most recent call last):
File "/usr/share/gdb/auto-load/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19-gdb.py", line 63, in <module>
from libstdcxx.v6.printers import register_libstdcxx_printers
ImportError: No module named 'libstdcxx'
Program received signal SIGSEGV, Segmentation fault.
0x0000000000000000 in ?? ()
The program should display 2 triangles in a window.
I'm relatively new to C++, and I'm very new to linking libraries and debugging with gdb. I've seen many Q&As on segmentation faults. One person did have the exact same problem as I did, but unfortunately, he deleted the question.
I don't really feel this is a duplicate, since I didn't really understand most of the answers I've read on the same topic. Please make the answer as simple as possible.
I'm using the freeglut3-dev library, and I'm using NVIDIA Driver version 346.59.
EDIT:
The exact error happens at init(), in:
glGenVertexArrays(NumVAOs, VAOs);
glBindVertexArray(VAOs[Triangles]);
The segmentation fault occurred at the very first command that used GL. So, I made sure to include freeglut, which pulls in all the necessary OpenGL headers as well:
#include <GL/freeglut.h>
I would like to thank Mateusz Grzejek for pointing this out.

OpenGL refuses to draw a triangle

I'm attempting to draw a single large triangle in a window in OpenGL. My program compiles and runs, but I get just a black screen in my window.
I've checked and double-checked multiple tutorials and it seems like my steps are correct... Am I missing something obvious?
Here is the program in its entirety:
#include <stdlib.h>
#include <stdio.h>
#include <GL/glew.h>
#include <GLUT/glut.h>
GLuint VBO;
struct vector {
    float _x;
    float _y;
    float _z;
    vector() { }
    vector(float x, float y, float z) { _x = x; _y = y; _z = z; }
};
void render()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableVertexAttribArray(0);
    glutSwapBuffers();
}
void create_vbo()
{
    vector verts[3];
    verts[0] = vector(-1.0f, -1.0f, 0.0f);
    verts[1] = vector(1.0f, -1.0f, 0.0f);
    verts[2] = vector(0.0f, 1.0f, 0.0f);
    glGenBuffers(1, &VBO);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
}
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGBA);
    glutInitWindowSize(1024, 768);
    glutInitWindowPosition(100, 100);
    glutCreateWindow("Triangle Test");
    glutDisplayFunc(render);
    glewInit();
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    create_vbo();
    glutMainLoop();
    return 0;
}
Update: It turns out that drawing this way without a "program" (that is, compiled shader files) produces undefined behavior (though the newer your graphics card, the more likely it is to work).
Because my card is right on the edge and only supports OpenGL 2.1, it was a little difficult to find an appropriate shader example that would work; there seem to be many different tutorials out there written at different stages in the evolution of OpenGL.
My vertex shader (entire file):
void main()
{
    gl_Position = ftransform();
}
My fragment shader (entire file):
void main()
{
    gl_FragColor = vec4(0.4,0.4,0.8,1.0);
}
I used the example LoadShaders function from this OpenGL Tutorial Site to create the program, and now, I, too, can see the triangle!
(Thanks to @chbaker0 for pointing me in the right direction.)
I do not know if this will help you or not, but in your create_vbo() function, where you have:
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
try this instead:
glBufferData( GL_ARRAY_BUFFER, sizeof( verts[0] ) * 3, &verts[0], GL_STATIC_DRAW );
And after this call, add the following to the end of your create_vbo() function:
// This MUST BE LAST! Used to stop (unbind) the buffer!
glBindBuffer( GL_ARRAY_BUFFER, 0 );
It is hard for me to see your error. In my projects I do have some VBOs, but I am also using VAOs as well. My code works in OpenGL 2.0 - 4.5, but for the older versions there is a split in logic because of the deprecated functions within the API. I also do not use GLUT. I hope this helps.
The other thing I noticed: did you pay attention to your vertex winding order? That is, are the vertices consumed by OpenGL in CCW or CW order? Is back-face culling turned on or off? There are a lot of elements to consider when setting up and configuring an OpenGL context. It has been a while since I worked with older versions of OpenGL, but I do know that once you start working with a specific modern version you will have to supply your own model-view-projection matrix; just something to consider.
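A quick way to rule the winding/culling question out is to pin both to known-good values before drawing; these also happen to be the GL defaults:
glFrontFace(GL_CCW);     // counter-clockwise vertices are front-facing
glDisable(GL_CULL_FACE); // with culling off, winding order cannot hide the triangle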
The issue I ran into was using pipeline features without defining a shader program. The spec says this should work, but on my graphics card it did not. (See my update in the question for more specifics.)
Thanks to all the commenters for nudging me in the right direction.

OpenGL Programming Guide Eighth Edition, sample programs and 'NumVAOs'

For anyone who has seen my previous questions: after working through the Red Book for version 2.1, I am now moving on to version 4.3. (Hooray, you say, since many of you have been telling me to do this for ages.)
So, I am deep into Chapter 3, but still haven't got Chapter 1's example program working.
I have two problems. (Actually 3.) Firstly, it doesn't compile. Okay, so that's a problem, but it's kind of irrelevant considering the next two. Secondly, I don't exactly understand how it works or what it is trying to do, but we will get onto that.
Thirdly, it seems to me that the author of this code is a complete magician. I would suggest all sorts of tinkery-hackery are occurring here. This is most likely because of problem number 2, the fact that I don't understand what it is trying to do. The guys who wrote this book are, of course, not idiots, but bear with me; I will give an example.
Here is a section of code taken from the top of the main.cpp file. I will include the rest of the file later on, but for now:
enum VAO_IDs {
    Triangles,
    NumVAOs
};
If I understand correctly, this gives VAO_IDs::Triangles the value of 0 and VAO_IDs::NumVAOs the value of 1, since enums are zero-based. (I hope I am correct here, or it will be embarrassing for me.)
A short while later, you can see this line:
GLuint VAOs[NumVAOs];
This declares an array of GLuints, containing 1 GLuint, due to the fact that NumVAOs is equal to 1. Now, firstly, shouldn't it be VAO_IDs::NumVAOs?
And secondly, why on earth has an enum been used in this way? I would never use an enum like that, for obvious reasons: you cannot have more than one entry with the same value, the values are not explicitly specified, etc...
Am I barking up the right tree here? It just doesn't make sense to do this... NumVAOs should have been a global, like this, surely? GLuint NumVAOs = 1; This is just abusive to the enum!
In fact, below this, the statement const GLuint NumVertices = 6; appears. This makes sense, doesn't it, because we can change the value 6 if we want to, but we cannot change NumVAOs to 0, for example, because Triangles is already set to 0. (Why is it in an enum? Seriously?)
Anyway, forget the enums... for now... Okay, so I made a big deal out of that, and that's the end of the problems... Any further comments I have are in the code below. You can ignore most of the GLFW stuff; it's essentially the same as GLUT.
// ----------------------------------------------------------------------------
//
// Triangles - First OpenGL 4.3 Program
//
// ----------------------------------------------------------------------------
#include <cstdlib>
#include <cstdint>
#include <cmath>
#include <stdio.h>
#include <iostream>
//#include <GL/gl.h>
//#include <GL/glu.h>
#include <GL/glew.h>
#include <GLFW/glfw3.h>
/// OpenGL specific
#include "vgl.h"
#include "LoadShaders.h" // These are essentially empty files with some background work going on, nothing declared or defined which is relevant here
enum VAO_IDs {
    Triangles,
    NumVAOs
};
// So Triangles = 0, NumVAOs = 1
// WHY DO THIS?!
enum Buffer_IDs {
    ArrayBuffer,
    NumBuffers
};
enum Attrib_IDs {
    vPosition = 0
}
// Please, please, please someone explain the enum thing to me, why are they using them instead of global -just- variables.
// (Yeah an enum is a variable, okay, but you know what I mean.)
GLuint VAOs[NumVAOs];         // Compile error: expected initializer before 'VAOs'
GLuint Buffers[NumBuffers];   // NumBuffers is hidden in an enum again, like NumVAOs
const GLuint NumVertices = 6; // Why do something different here?
// ----------------------------------------------------------------------------
//
// Init
//
// ----------------------------------------------------------------------------
void init()
{
    glGenVertexArrays(NumVAOs, VAOs); // Error: VAOs was not declared in this scope
    glBindVertexArray(VAOs[Triangles]);
    GLfloat vertices[NumVertices][2] = {
        { -0.90, -0.90 },
        { +0.85, -0.90 },
        { -0.90, +0.85 },
        { +0.90, -0.85 },
        { +0.90, +0.90 },
        { -0.85, +0.90 }
    };
    glGenBuffers(NumBuffers, Buffers);
    glBindBuffer(GL_ARRAY_BUFFER, Buffers[ArrayBuffer]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    ShaderInfo shaders[] = {
        { GL_VERTEX_SHADER, "triangles.vert" },
        { GL_FRAGMENT_SHADER, "triangles.frag" },
        { GL_NONE, nullptr }
    };
    GLuint program = LoadShaders(shaders);
    glUseProgram(program);
    glVertexAttribPointer(vPosition, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
    glEnableVertexAttribArray(vPosition);
}
// ----------------------------------------------------------------------------
//
// Display
//
// ----------------------------------------------------------------------------
void display()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBindVertexArray(VAOs[Triangles]);
    glDrawArrays(GL_TRIANGLES, 0, NumVertices); // Error: VAOs not declared
    glFlush();
}
// ----------------------------------------------------------------------------
//
// Main
//
// ----------------------------------------------------------------------------
void error_handle(int error, const char* description)
{
    fputs(description, stderr);
}
void key_handle(GLFWwindow* window, int key, int scancode, int action, int mods)
{
    if(key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
        glfwSetWindowShouldClose(window, GL_TRUE);
}
void handle_exit()
{
}
int main(int argc, char **argv)
{
    // Setup exit function
    atexit(handle_exit);
    // GLFW Window Pointer
    GLFWwindow* window;
    // Setup error callback
    glfwSetErrorCallback(error_handle);
    // Init
    if(!glfwInit())
    {
        exit(EXIT_FAILURE);
    }
    // Setup OpenGL
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glEnable(GL_DEPTH_TEST);
    // Set GLFW window hints
    glfwWindowHint(GLFW_DEPTH_BITS, 32);
    glfwWindowHint(GLFW_RED_BITS, 8);
    glfwWindowHint(GLFW_GREEN_BITS, 8);
    glfwWindowHint(GLFW_BLUE_BITS, 8);
    glfwWindowHint(GLFW_ALPHA_BITS, 8);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    //glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, 1);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    // Init GLEW
    if(glewInit())
    {
        printf("GLEW init failure!\n", stderr);
        exit(EXIT_FAILURE);
    }
    // Init OpenGL
    init();
    // Create Window
    window = glfwCreateWindow(800, 600, "Window Title", nullptr, nullptr);
    if(!window)
    {
        glfwTerminate();
        return EXIT_FAILURE;
    }
    // Make current
    glfwMakeContextCurrent(window);
    // Set key callback
    glfwSetKeyCallback(window, key_handle);
    // Check OpenGL Version
    char* version;
    version = (char*)glGetString(GL_VERSION);
    printf("OpenGL Application Running, Version: %s\n", version);
    // Enter main loop
    while(!glfwWindowShouldClose(window))
    {
        // Event polling
        glfwPollEvents();
        // OpenGL Rendering
        // Setup OpenGL viewport and clear screen
        float ratio;
        int width, height;
        glfwGetFramebufferSize(window, &width, &height);
        ratio = width / height;
        glViewport(0, 0, width, height);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // Setup projection
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(45.0, ratio, 0.1, 10.0);
        // Render
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        // Swap Buffers
        glfwSwapBuffers(window);
    }
    // Free glfw memory allocated for window
    glfwDestroyWindow(window);
    // Exit
    glfwTerminate();
    exit(EXIT_SUCCESS);
}
A very verbose question, I realize, but I thought it was important to explain why I think it's crazy code, rather than just saying "I don't get it", as would be easy to do. Could someone please explain why these very clever people decided to do it this way, and why there are errors? (I can find nothing about this online.)
The author is using the automatic numbering property of enums to keep the NumVAOs and NumBuffers values up to date automatically. For example, when new VAO IDs are added to the enum, the NumVAOs value will still be correct as long as it is listed last in the enum.
enum VAO_IDs {
    Triangles,
    Polygons,
    Circles,
    NumVAOs
};
Most likely your compiler does not support this trick.