Minimal C++ OpenGL Segmentation Fault

I am following the first instalment in Tom Dalling's Modern OpenGL tutorial and when I downloaded his code it worked perfectly. So then I wanted to create my own minimal version to help me understand how it all worked.
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <GLM/glm.hpp>
#include <stdexcept>
GLFWwindow* gWindow = NULL;
GLint gProgram = 0;
GLuint gVertexShader = 0;
GLuint gFragmentShader = 0;
GLuint gVAO = 0;
GLuint gVBO = 0;
int main() {
    /** Initialise GLFW **/
    if (!glfwInit()) {
        throw std::runtime_error("GLFW not initialised.");
    }
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
    gWindow = glfwCreateWindow(800, 600, "OpenGL 3.2", NULL, NULL);
    if (!gWindow) {
        throw std::runtime_error("glfwCreateWindow failed.");
    }
    glfwMakeContextCurrent(gWindow);
    /** Initialise GLEW **/
    glewExperimental = GL_TRUE;
    if (glewInit() != GLEW_OK) {
        throw std::runtime_error("Glew not initialised.");
    }
    if (!GLEW_VERSION_3_2) {
        throw std::runtime_error("OpenGL 3.2 not supported.");
    }
    /** Set up shaders **/
    gVertexShader = glCreateShader(GL_VERTEX_SHADER); // Segmentation fault here
    gFragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(gVertexShader, 1, (const GLchar* const*)R"(
#version 150
in vec3 vertexPos
out vec colour;
void main() {
colour = vertexPos;
gl_Position = vec4(vertexPos, 1);
}
)", NULL);
    glShaderSource(gFragmentShader, 1, (const GLchar* const*)R"(
#version 150
in vec3 colour;
out vec4 fragColour;
void main() {
fragColour = colour;
}
)", NULL);
    glCompileShader(gVertexShader);
    glCompileShader(gFragmentShader);
    /** Set up program **/
    gProgram = glCreateProgram();
    glAttachShader(gProgram, gVertexShader);
    glAttachShader(gProgram, gFragmentShader);
    glLinkProgram(gProgram);
    glDetachShader(gProgram, gVertexShader);
    glDetachShader(gProgram, gFragmentShader);
    glDeleteShader(gVertexShader);
    glDeleteShader(gFragmentShader);
    /** Set up VAO and VBO **/
    float vertexData[] = {
        // X,    Y,    Z
        0.2f, 0.2f, 0.0f,
        0.8f, 0.2f, 0.0f,
        0.5f, 0.8f, 0.0f,
    };
    glGenVertexArrays(1, &gVAO);
    glBindVertexArray(gVAO);
    glGenBuffers(1, &gVBO);
    glBindBuffer(GL_ARRAY_BUFFER, gVBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(GL_FLOAT)*9, vertexData, GL_STATIC_DRAW);
    glVertexAttribPointer(glGetAttribLocation(gProgram, "vertexPos"), 3, GL_FLOAT, GL_FALSE, 0, NULL);
    /** Main Loop **/
    while (!glfwWindowShouldClose(gWindow)) {
        glfwPollEvents();
        glBindBuffer(GL_ARRAY_BUFFER, gVBO);
        glDrawArrays(GL_TRIANGLES, 0, 1);
        glfwSwapBuffers(gWindow);
    }
    return 0;
}
This is what I created, and it builds correctly. However, when running it I get a segmentation fault upon calling glCreateShader.
I have done a lot of searching to find a solution to this and so far I've found two main reasons for this happening:
One is that glewExperimental needs to be set to GL_TRUE, however nothing changed after adding this line.
The second reason was that the OpenGL context was not being set, but I believe the call to glfwMakeContextCurrent covers this point.
From here I have no clue as to what to do, so any tips would be a great help!
EDIT: Thank you all for your help! I eventually managed to get it working and have uploaded the code to pastebin for future reference.

Problem solution
Your program crashes not at the line you pointed to, but on the first glShaderSource call. You are actually passing a pointer to a string as the third argument, casting it to (const GLchar* const*). That is not what glShaderSource expects: it expects a pointer to an array of pointers to strings, and the length of that array of pointers should be passed as the second argument (see the documentation).
As a quick-n-dirty fix for your code, I declared the following:
const char *vs_source[] = {
    "#version 150\n",
    "in vec3 vertexPos\n",
    "out vec colour;\n",
    "void main() {\n",
    " colour = vertexPos;\n",
    " gl_Position = vec4(vertexPos, 1);\n",
    "}\n"};
const char *fs_source[] = {
    "#version 150\n",
    "in vec3 colour;\n",
    "out vec4 fragColour;\n",
    "void main() {\n",
    " fragColour = colour;\n",
    "}\n"};
and replaced the calls to glShaderSource with:
glShaderSource(gVertexShader, 7, (const char**)vs_source, NULL);
glShaderSource(gFragmentShader, 6, (const char**)fs_source, NULL);
Now the program works fine and shows a window as expected.
Hints
Your code did not even compile on my machine (I use gcc 4.8.3 on Linux) because of the cast of a string literal to (const GLchar* const*). So try to set more restrictive flags in your compiler options and pay attention to compiler warnings. Moreover, treat warnings as errors.
Use some kind of memory profiler to determine the exact location of crashes. For example, I used Valgrind to localize the problem.

Related

Getting OpenGL ES Transform Feedback data back to the host using Emscripten

Using GLFW, the C++ OpenGL ES code below calculates the square roots of five numbers and outputs them on the command line. The code makes use of Transform Feedback. When I compile on Ubuntu 17.10, using the following command, I get the result I expect:
$ g++ tf.cpp -lGL -lglfw
If I use Emscripten, however, an exception is thrown, which indicates that
glMapBufferRange is only supported when access is MAP_WRITE|INVALIDATE_BUFFER. I do want to read rather than write, so perhaps I shouldn't use glMapBufferRange, but what can I use instead? I have tried on both Firefox and Chromium. The command I use to compile with Emscripten is:
$ em++ -std=c++11 tf.cpp -s USE_GLFW=3 -s USE_WEBGL2=1 -s FULL_ES3=1 -o a.out.html
The code follows:
#include <iostream>
#define GLFW_INCLUDE_ES3
#include <GLFW/glfw3.h>
#ifdef __EMSCRIPTEN__
#include <emscripten.h>
#endif
static const GLuint WIDTH = 800;
static const GLuint HEIGHT = 600;
static const GLchar *vertex_shader_src =
"#version 300 es\n"
"precision mediump float;\n"
"in float inValue;\n"
"out float outValue;\n"
"void main() {\n"
" outValue = sqrt(inValue);\n"
"}\n";
// Emscripten complains if there's no fragment shader
static const GLchar *fragment_shader_src =
"#version 300 es\n"
"precision mediump float;\n"
"out vec4 colour;\n"
"void main() {\n"
" colour = vec4(1.0, 1.0, 0.0, 1.0);\n"
"}\n";
static const GLfloat vertices[] = {
0.0f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f,
};
GLint get_shader_prog(const char *vert_src, const char *frag_src = "")
{
enum Consts { INFOLOG_LEN = 512 };
GLchar infoLog[INFOLOG_LEN];
GLint fragment_shader, vertex_shader, shader_program, success;
shader_program = glCreateProgram();
auto mk_shader = [&](GLint &shader, const GLchar **str, GLenum shader_type) {
shader = glCreateShader(shader_type);
glShaderSource(shader, 1, str, NULL);
glCompileShader(shader);
glGetShaderiv(shader, GL_COMPILE_STATUS, &success);
if (!success) {
glGetShaderInfoLog(shader, INFOLOG_LEN, NULL, infoLog);
std::cout << "ERROR::SHADER::COMPILATION_FAILED\n" << infoLog << '\n';
}
glAttachShader(shader_program, shader);
};
mk_shader(vertex_shader, &vert_src, GL_VERTEX_SHADER);
mk_shader(fragment_shader, &frag_src, GL_FRAGMENT_SHADER);
const GLchar* feedbackVaryings[] = { "outValue" };
glTransformFeedbackVaryings(shader_program, 1, feedbackVaryings,
GL_INTERLEAVED_ATTRIBS);
glLinkProgram(shader_program);
glGetProgramiv(shader_program, GL_LINK_STATUS, &success);
if (!success) {
glGetProgramInfoLog(shader_program, INFOLOG_LEN, NULL, infoLog);
std::cout << "ERROR::SHADER::PROGRAM::LINKING_FAILED\n" << infoLog << '\n';
}
glUseProgram(shader_program);
glDeleteShader(vertex_shader);
glDeleteShader(fragment_shader);
return shader_program;
}
int main(int argc, char *argv[])
{
glfwInit();
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_ES_API);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
GLFWwindow *window = glfwCreateWindow(WIDTH, HEIGHT, __FILE__, NULL, NULL);
glfwMakeContextCurrent(window);
GLuint shader_prog = get_shader_prog(vertex_shader_src, fragment_shader_src);
GLint inputAttrib = glGetAttribLocation(shader_prog, "inValue");
glViewport(0, 0, WIDTH, HEIGHT);
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
GLfloat data[] = { 1.0f, 2.0f, 3.0f, 4.0f, 5.0f };
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);
glVertexAttribPointer(inputAttrib, 1, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(inputAttrib);
GLuint tbo;
glGenBuffers(1, &tbo);
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tbo);
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, sizeof(data),
nullptr, GL_STATIC_READ);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tbo);
glEnable(GL_RASTERIZER_DISCARD);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, 5);
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);
glFlush();
GLfloat feedback[5]{1,2,3,4,5};
void *void_buf = glMapBufferRange(GL_TRANSFORM_FEEDBACK_BUFFER,0,
sizeof(feedback), GL_MAP_READ_BIT);
GLfloat *buf = static_cast<GLfloat *>(void_buf);
for (int i = 0; i < 5; i++)
feedback[i] = buf[i];
glUnmapBuffer(GL_TRANSFORM_FEEDBACK_BUFFER);
for (int i = 0; i < 5; i++)
std::cout << feedback[i] << ' ';
std::cout << std::endl;
glDeleteBuffers(1, &vbo);
glDeleteBuffers(1, &tbo);
glDeleteVertexArrays(1, &vao);
glfwTerminate();
return 0;
}
As has been mentioned by another answer, WebGL 2.0 is close to OpenGL ES 3.0 but, confusingly, does not define the glMapBufferRange() function, so Emscripten tries to emulate only part of this function's functionality.
Real WebGL 2.0, however, exposes a glGetBufferSubData() analog, which does not exist in OpenGL ES but does exist in desktop OpenGL. The method can be wrapped in code via EM_ASM_:
void myGetBufferSubData (GLenum theTarget, GLintptr theOffset, GLsizeiptr theSize, void* theData)
{
#ifdef __EMSCRIPTEN__
EM_ASM_(
{
Module.ctx.getBufferSubData($0, $1, HEAPU8.subarray($2, $2 + $3));
}, theTarget, theOffset, theData, theSize);
#else
glGetBufferSubData (theTarget, theOffset, theSize, theData);
#endif
}
Fetching VBO data back is troublesome in multi-platform code:

OpenGL 1.5+:    glGetBufferSubData(): YES   glMapBufferRange(): YES
OpenGL ES 2.0:  glGetBufferSubData(): NO    glMapBufferRange(): NO
OpenGL ES 3.0+: glGetBufferSubData(): NO    glMapBufferRange(): YES
WebGL 1.0:      glGetBufferSubData(): NO    glMapBufferRange(): NO
WebGL 2.0:      glGetBufferSubData(): YES   glMapBufferRange(): NO

In summary:
Desktop OpenGL gives maximum flexibility.
OpenGL ES 2.0 and WebGL 1.0 give no way to retrieve the data back.
OpenGL ES 3.0+ offers only buffer mapping.
WebGL 2.0 offers only getBufferSubData().
WebGL 2 does not support MapBufferRange because it would be a security nightmare; instead it supports getBufferSubData. I have no idea whether that is exposed to WebAssembly in Emscripten. Emscripten emulates MapBufferRange for the cases you mentioned by using bufferData. See here and here.
If getBufferSubData is not supported, you can add it. See the code in library_gl.js for how readPixels is implemented and use that as inspiration for exposing getBufferSubData to WebAssembly. Either that, or add some inline JavaScript with EM_ASM. I haven't done it myself, but googling for "getBufferSubData emscripten" brought up this gist.

OpenGL Red Book with Mac OS X

I would like to work through the OpenGL Red Book, The OpenGL Programming Guide, 8th edition, using Xcode on Mac OS X.
I am unable to run the first code example, triangles.cpp. I have tried including the GLUT and GL frameworks that come with Xcode and I have searched around enough to see that I am not likely to figure this out on my own.
Assuming that I have a fresh installation of Mac OS X, and I have freshly installed Xcode with Xcode command-line tools, what are the step-by-step instructions to be able to run triangles.cpp in that environment?
Unlike this question, my preference would be not to use Cocoa, Objective-C or Swift. My preference would be to stay in C++/C only. An answer is only correct if I can follow it step-by-step and end up with a running triangles.cpp program.
My preference is Mac OS X 10.9, however a correct answer can assume 10.9, 10.10 or 10.11.
Thank you.
///////////////////////////////////////////////////////////////////////
//
// triangles.cpp
//
///////////////////////////////////////////////////////////////////////
#include <iostream>
using namespace std;
#include "vgl.h"
#include "LoadShader.h"
enum VAO_IDs { Triangles, NumVAOs };
enum Buffer_IDs { ArrayBuffer, NumBuffers };
enum Attrib_IDs { vPosition = 0 };
GLuint VAOs[NumVAOs];
GLuint Buffers[NumBuffers];
const GLuint NumVertices = 6;
//---------------------------------------------------------------------
//
// init
//
void
init(void)
{
glGenVertexArrays(NumVAOs, VAOs);
glBindVertexArray(VAOs[Triangles]);
GLfloat vertices[NumVertices][2] = {
{ -0.90, -0.90 }, // Triangle 1
{ 0.85, -0.90 },
{ -0.90, 0.85 },
{ 0.90, -0.85 }, // Triangle 2
{ 0.90, 0.90 },
{ -0.85, 0.90 }
};
glGenBuffers(NumBuffers, Buffers);
glBindBuffer(GL_ARRAY_BUFFER, Buffers[ArrayBuffer]);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices),
vertices, GL_STATIC_DRAW);
ShaderInfo shaders[] = {
{ GL_VERTEX_SHADER, "triangles.vert" },
{ GL_FRAGMENT_SHADER, "triangles.frag" },
{ GL_NONE, NULL }
};
GLuint program = LoadShaders(*shaders);
glUseProgram(program);
glVertexAttribPointer(vPosition, 2, GL_FLOAT,
GL_FALSE, 0, BUFFER_OFFSET(0));
glEnableVertexAttribArray(vPosition);
}
//---------------------------------------------------------------------
//
// display
//
void
display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
glBindVertexArray(VAOs[Triangles]);
glDrawArrays(GL_TRIANGLES, 0, NumVertices);
glFlush();
}
//---------------------------------------------------------------------
//
// main
//
int
main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA);
glutInitWindowSize(512, 512);
glutInitContextVersion(4, 3);
glutInitContextProfile(GLUT_CORE_PROFILE);
glutCreateWindow(argv[0]);
glewExperimental = GL_TRUE;
if (glewInit()) {
cerr << "Unable to initialize GLEW ... exiting" << endl;
exit(EXIT_FAILURE);
}
init();
glutDisplayFunc(display);
glutMainLoop();
}
Edit 1: In response to the first comment, here is the naive effort.
Open Xcode 5.1.1 on Mac OS X 10.9.5
Create a new C++ Command-line project.
Paste over the contents of main.cpp with the contents of triangles.cpp.
Click on the project -> Build Phases -> Link Binary with Libraries
Add OpenGL.framework and GLUT.framework
Result: "/Users/xxx/Desktop/Triangles/Triangles/main.cpp:10:10: 'vgl.h' file not found"
Edit 2: Added the vgl translation unit and the LoadShaders translation unit, and also added libFreeGlut.a and libGlew32.a to my project's compilation/linking. Moved all of the OpenGL Book's Include contents to my project's source directory. Had to change several include statements to use quoted includes instead of angled includes. It feels like this is closer to working, but it is unable to find LoadShader.h. Note that the translation unit in the OpenGL download is called LoadShaders (plural). Changing triangles.cpp to reference LoadShaders.h fixed the include problem, but the contents of that translation unit don't seem to match the signatures of what's being called from triangles.cpp.
There are some issues with the source and with the files in oglpg-8th-edition.zip:
triangles.cpp uses non-standard GLUT functions that aren't included in glut, and instead are only part of the freeglut implementation (glutInitContextVersion and glutInitContextProfile). freeglut doesn't really support OS X and building it instead relies on additional X11 support. Instead of telling you how to do this I'm just going to modify the source to build with OS X's GLUT framework.
The code depends on glew, and the book's source download apparently doesn't include a binary you can use, so you'll need to build it for yourself.
Build GLEW with the following commands:
git clone git://git.code.sf.net/p/glew/code glew
cd glew
make extensions
make
Now:
Create a C++ command line Xcode project
Set the executable to link with the OpenGL and GLUT frameworks and the glew dylib you just built.
Modify the project "Header Search Paths" to include the location of the glew headers for the library you built, followed by the path to oglpg-8th-edition/include
Add oglpg-8th-edition/lib/LoadShaders.cpp to your xcode project
Paste the triangles.cpp source into the main.cpp of your Xcode project
Modify the source: replace #include "vgl.h" with:
#include <GL/glew.h>
#include <OpenGL/gl3.h>
#include <GLUT/glut.h>
#define BUFFER_OFFSET(x) ((const void*) (x))
Also make sure that the typos in the version of triangle.cpp that you include in your question are fixed: You include "LoadShader.h" when it should be "LoadShaders.h", and LoadShaders(*shaders); should be LoadShaders(shaders). (The code printed in my copy of the book doesn't contain these errors.)
Delete the calls to glutInitContextVersion and glutInitContextProfile.
Change the parameter to glutInitDisplayMode to GLUT_RGBA | GLUT_3_2_CORE_PROFILE
At this point the code builds, links, and runs, however running the program displays a black window for me instead of the expected triangles.
About fixing the black-window issue mentioned in Matthew's and bames53's comments:
Follow bames53's answer.
Define the shaders as strings:
const char *pTriangleVert =
"#version 410 core\n\
layout(location = 0) in vec4 vPosition;\n\
void\n\
main()\n\
{\n\
gl_Position= vPosition;\n\
}";
const char *pTriangleFrag =
"#version 410 core\n\
out vec4 fColor;\n\
void\n\
main()\n\
{\n\
fColor = vec4(0.0, 0.0, 1.0, 1.0);\n\
}";
OpenGL 4.1 is supported on my iMac, so I changed the version to 410:
ShaderInfo shaders[] = {
{ GL_VERTEX_SHADER, pTriangleVert},
{ GL_FRAGMENT_SHADER, pTriangleFrag },
{ GL_NONE, NULL }
};
Modify the ShaderInfo struct slightly
change
typedef struct {
GLenum type;
const char* filename;
GLuint shader;
} ShaderInfo;
into
typedef struct {
GLenum type;
const char* source;
GLuint shader;
} ShaderInfo;
Modify the LoadShaders function slightly
Comment out the code that reads the shader from a file: change
/*
const GLchar* source = ReadShader( entry->filename );
if ( source == NULL ) {
for ( entry = shaders; entry->type != GL_NONE; ++entry ) {
glDeleteShader( entry->shader );
entry->shader = 0;
}
return 0;
}
glShaderSource( shader, 1, &source, NULL );
delete [] source;*/
into
glShaderSource(shader, 1, &entry->source, NULL);
You had better turn on DEBUG in case there are shader compile errors.
You can use the example from this link; it's almost the same, but uses GLFW instead of GLUT:
http://www.tomdalling.com/blog/modern-opengl/01-getting-started-in-xcode-and-visual-cpp/
/*
main
Copyright 2012 Thomas Dalling - http://tomdalling.com/
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
//#include "platform.hpp"
// third-party libraries
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>
// standard C++ libraries
#include <cassert>
#include <iostream>
#include <stdexcept>
#include <cmath>
// tdogl classes
#include "Program.h"
// constants
const glm::vec2 SCREEN_SIZE(800, 600);
// globals
GLFWwindow* gWindow = NULL;
tdogl::Program* gProgram = NULL;
GLuint gVAO = 0;
GLuint gVBO = 0;
// loads the vertex shader and fragment shader, and links them to make the global gProgram
static void LoadShaders() {
std::vector<tdogl::Shader> shaders;
shaders.push_back(tdogl::Shader::shaderFromFile("vertex-shader.txt", GL_VERTEX_SHADER));
shaders.push_back(tdogl::Shader::shaderFromFile("fragment-shader.txt", GL_FRAGMENT_SHADER));
gProgram = new tdogl::Program(shaders);
}
// loads a triangle into the VAO global
static void LoadTriangle() {
// make and bind the VAO
glGenVertexArrays(1, &gVAO);
glBindVertexArray(gVAO);
// make and bind the VBO
glGenBuffers(1, &gVBO);
glBindBuffer(GL_ARRAY_BUFFER, gVBO);
// Put the three triangle verticies into the VBO
GLfloat vertexData[] = {
// X Y Z
0.0f, 0.8f, 0.0f,
-0.8f,-0.8f, 0.0f,
0.8f,-0.8f, 0.0f,
};
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), vertexData, GL_STATIC_DRAW);
// connect the xyz to the "vert" attribute of the vertex shader
glEnableVertexAttribArray(gProgram->attrib("vert"));
glVertexAttribPointer(gProgram->attrib("vert"), 3, GL_FLOAT, GL_FALSE, 0, NULL);
// unbind the VBO and VAO
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
// draws a single frame
static void Render() {
// clear everything
glClearColor(0, 0, 0, 1); // black
glClear(GL_COLOR_BUFFER_BIT);
// bind the program (the shaders)
glUseProgram(gProgram->object());
// bind the VAO (the triangle)
glBindVertexArray(gVAO);
// draw the VAO
glDrawArrays(GL_TRIANGLES, 0, 3);
// unbind the VAO
glBindVertexArray(0);
// unbind the program
glUseProgram(0);
// swap the display buffers (displays what was just drawn)
glfwSwapBuffers(gWindow);
}
void OnError(int errorCode, const char* msg) {
throw std::runtime_error(msg);
}
// the program starts here
void AppMain() {
// initialise GLFW
glfwSetErrorCallback(OnError);
if(!glfwInit())
throw std::runtime_error("glfwInit failed");
// open a window with GLFW
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
gWindow = glfwCreateWindow((int)SCREEN_SIZE.x, (int)SCREEN_SIZE.y, "OpenGL Tutorial", NULL, NULL);
if(!gWindow)
throw std::runtime_error("glfwCreateWindow failed. Can your hardware handle OpenGL 3.2?");
// GLFW settings
glfwMakeContextCurrent(gWindow);
// initialise GLEW
glewExperimental = GL_TRUE; //stops glew crashing on OSX :-/
if(glewInit() != GLEW_OK)
throw std::runtime_error("glewInit failed");
// print out some info about the graphics drivers
std::cout << "OpenGL version: " << glGetString(GL_VERSION) << std::endl;
std::cout << "GLSL version: " << glGetString(GL_SHADING_LANGUAGE_VERSION) << std::endl;
std::cout << "Vendor: " << glGetString(GL_VENDOR) << std::endl;
std::cout << "Renderer: " << glGetString(GL_RENDERER) << std::endl;
// make sure OpenGL version 3.2 API is available
if(!GLEW_VERSION_3_2)
throw std::runtime_error("OpenGL 3.2 API is not available.");
// load vertex and fragment shaders into opengl
LoadShaders();
// create buffer and fill it with the points of the triangle
LoadTriangle();
// run while the window is open
while(!glfwWindowShouldClose(gWindow)){
// process pending events
glfwPollEvents();
// draw one frame
Render();
}
// clean up and exit
glfwTerminate();
}
int main(int argc, char *argv[]) {
try {
AppMain();
} catch (const std::exception& e){
std::cerr << "ERROR: " << e.what() << std::endl;
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
I have adapted the project for MAC here:
https://github.com/badousuan/openGLredBook9th
The project builds successfully and most demos run as expected. However, the original code is based on OpenGL 4.5, while the Mac only supports version 4.1, so some newer API calls may fail. If some target doesn't work well, you should consider this version issue and adapt the code.
I use the code from this tutorial: http://antongerdelan.net/opengl/hellotriangle.html, and it works on my Mac.
Here is the code I run.
#include <GL/glew.h> // include GLEW and new version of GL on Windows
#include <GLFW/glfw3.h> // GLFW helper library
#include <stdio.h>
int main() {
// start GL context and O/S window using the GLFW helper library
if (!glfwInit()) {
fprintf(stderr, "ERROR: could not start GLFW3\n");
return 1;
}
// uncomment these lines if on Apple OS X
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
GLFWwindow* window = glfwCreateWindow(640, 480, "Hello Triangle", NULL, NULL);
if (!window) {
fprintf(stderr, "ERROR: could not open window with GLFW3\n");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent(window);
// start GLEW extension handler
glewExperimental = GL_TRUE;
glewInit();
// get version info
const GLubyte* renderer = glGetString(GL_RENDERER); // get renderer string
const GLubyte* version = glGetString(GL_VERSION); // version as a string
printf("Renderer: %s\n", renderer);
printf("OpenGL version supported %s\n", version);
// tell GL to only draw onto a pixel if the shape is closer to the viewer
glEnable(GL_DEPTH_TEST); // enable depth-testing
glDepthFunc(GL_LESS); // depth-testing interprets a smaller value as "closer"
/* OTHER STUFF GOES HERE NEXT */
float points[] = {
0.0f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f
};
GLuint vbo = 0; // vertex buffer object
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(float), points, GL_STATIC_DRAW);
GLuint vao = 0; // vertex array object
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
const char* vertex_shader =
"#version 400\n"
"in vec3 vp;"
"void main() {"
" gl_Position = vec4(vp, 1.0);"
"}";
const char* fragment_shader =
"#version 400\n"
"out vec4 frag_colour;"
"void main() {"
" frag_colour = vec4(0.5, 0.0, 0.5, 1.0);"
"}";
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vertex_shader, NULL);
glCompileShader(vs);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fragment_shader, NULL);
glCompileShader(fs);
GLuint shader_programme = glCreateProgram();
glAttachShader(shader_programme, fs);
glAttachShader(shader_programme, vs);
glLinkProgram(shader_programme);
while(!glfwWindowShouldClose(window)) {
// wipe the drawing surface clear
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(shader_programme);
glBindVertexArray(vao);
// draw points 0-3 from the currently bound VAO with current in-use shader
glDrawArrays(GL_TRIANGLES, 0, 3);
// update other events like input handling
glfwPollEvents();
// put the stuff we've been drawing onto the display
glfwSwapBuffers(window);
}
// close GL context and any other GLFW resources
glfwTerminate();
return 0;
}

GLSL getting odd values back from my uniform and it seems to be set with the wrong value too

I'm having problems using a uniform in a vertex shader.
Here's the code:
// gcc main.c -o main `pkg-config --libs --cflags glfw3` -lGL -lm
#include <GLFW/glfw3.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
void gluErrorString(const char* why,GLenum errorCode);
void checkShader(GLuint status, GLuint shader, const char* which);
float verts[] = {
-0.5f, 0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f
};
const char* vertex_shader =
"#version 330\n"
"in vec3 vp;\n"
"uniform float u_time;\n"
"\n"
"void main () {\n"
" vec4 p = vec4(vp, 1.0);\n"
" p.x = p.x + u_time;\n"
" gl_Position = p;\n"
"}";
const char* fragment_shader =
"#version 330\n"
"out vec4 frag_colour;\n"
"void main () {\n"
" frag_colour = vec4 (0.5, 0.0, 0.5, 1.0);\n"
"}";
int main () {
if (!glfwInit ()) {
fprintf (stderr, "ERROR: could not start GLFW3\n");
return 1;
}
glfwWindowHint (GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint (GLFW_CONTEXT_VERSION_MINOR, 2);
//glfwWindowHint (GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint (GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
GLFWwindow* window = glfwCreateWindow (640, 480, "Hello Triangle", NULL, NULL);
if (!window) {
fprintf (stderr, "ERROR: could not open window with GLFW3\n");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent (window);
// vert arrays group vert buffers together unlike GLES2 (no vert arrays)
// we *must* have one of these even if we only need 1 vert buffer
GLuint vao = 0;
glGenVertexArrays (1, &vao);
glBindVertexArray (vao);
GLuint vbo = 0;
glGenBuffers (1, &vbo);
glBindBuffer (GL_ARRAY_BUFFER, vbo);
// each vert takes 3 float * 4 verts in the fan = 12 floats
glBufferData (GL_ARRAY_BUFFER, 12 * sizeof (float), verts, GL_STATIC_DRAW);
gluErrorString("buffer data",glGetError());
glEnableVertexAttribArray (0);
glBindBuffer (GL_ARRAY_BUFFER, vbo);
// 3 components per vert
glVertexAttribPointer (0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
gluErrorString("attrib pointer",glGetError());
GLuint vs = glCreateShader (GL_VERTEX_SHADER);
glShaderSource (vs, 1, &vertex_shader, NULL);
glCompileShader (vs);
GLint success = 0;
glGetShaderiv(vs, GL_COMPILE_STATUS, &success);
checkShader(success, vs, "Vert Shader");
GLuint fs = glCreateShader (GL_FRAGMENT_SHADER);
glShaderSource (fs, 1, &fragment_shader, NULL);
glCompileShader (fs);
glGetShaderiv(fs, GL_COMPILE_STATUS, &success);
checkShader(success, fs, "Frag Shader");
GLuint shader_program = glCreateProgram ();
glAttachShader (shader_program, fs);
glAttachShader (shader_program, vs);
glLinkProgram (shader_program);
gluErrorString("Link prog",glGetError());
glUseProgram (shader_program);
gluErrorString("use prog",glGetError());
GLuint uniT = glGetUniformLocation(shader_program,"u_time"); // ask gl to assign uniform id
gluErrorString("get uniform location",glGetError());
printf("uniT=%i\n",uniT);
glEnable (GL_DEPTH_TEST);
glDepthFunc (GL_LESS);
float t=0;
while (!glfwWindowShouldClose (window)) {
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
gluErrorString("clear",glGetError());
glUseProgram (shader_program);
gluErrorString("use prog",glGetError());
t=t+0.01f;
glUniform1f( uniT, (GLfloat)sin(t));
gluErrorString("set uniform",glGetError());
float val;
glGetUniformfv(shader_program, uniT, &val);
gluErrorString("get uniform",glGetError());
printf("val=%f ",val);
glBindVertexArray (vao);
gluErrorString("bind array",glGetError());
glDrawArrays (GL_TRIANGLE_FAN, 0, 4);
gluErrorString("draw arrays",glGetError());
glfwPollEvents ();
glfwSwapBuffers (window);
gluErrorString("swap buffers",glGetError());
}
glfwTerminate();
return 0;
}
void checkShader(GLuint status, GLuint shader, const char* which) {
if (status==GL_TRUE) return;
int length;
char buffer[1024];
glGetShaderInfoLog(shader, sizeof(buffer), &length, buffer);
fprintf (stderr,"%s Error: %s\n", which,buffer);
glfwTerminate();
exit(-1);
}
struct token_string
{
GLuint Token;
const char *String;
};
static const struct token_string Errors[] = {
{ GL_NO_ERROR, "no error" },
{ GL_INVALID_ENUM, "invalid enumerant" },
{ GL_INVALID_VALUE, "invalid value" },
{ GL_INVALID_OPERATION, "invalid operation" },
{ GL_STACK_OVERFLOW, "stack overflow" },
{ GL_STACK_UNDERFLOW, "stack underflow" },
{ GL_OUT_OF_MEMORY, "out of memory" },
{ GL_TABLE_TOO_LARGE, "table too large" },
#ifdef GL_EXT_framebuffer_object
{ GL_INVALID_FRAMEBUFFER_OPERATION_EXT, "invalid framebuffer operation" },
#endif
{ ~0, NULL } /* end of list indicator */
};
void gluErrorString(const char* why,GLenum errorCode)
{
if (errorCode== GL_NO_ERROR) return;
int i;
for (i = 0; Errors[i].String; i++) {
if (Errors[i].Token == errorCode) {
fprintf (stderr,"error: %s - %s\n",why,Errors[i].String);
glfwTerminate();
exit(-1);
}
}
}
When the code runs, the quad flickers as if the uniform is receiving junk values; also, reading the uniform back shows odd values like 36893488147419103232.000000 where there should be just a simple sine value.
The problem with your code is only indirectly related to GL at all - your GL code is OK.
However, you are using modern OpenGL functions without loading the function pointers as an extension. This might work on some platforms, but not on others. MacOS does guarantee that these functions are exported in the system's GL libs. On Windows, Microsoft's opengl32.dll never contains functions beyond GL 1.1 - your code wouldn't even link there. On Linux, you're somewhere in between. There is only the old Linux OpenGL ABI document, which guarantees that OpenGL 1.2 functions must be exported by the library. In practice, most GL implementations' libs on Linux export everything (but the fact that a function is there does not mean that it is supported). You should never directly link these functions, because nobody guarantees anything.
However, the story does not end here: You apparently did this on an implementation which does export the symbols. However, you did not include the correct headers. And you have set up your compiler very poorly. In C, it is valid (but poor style) to call a function which has not been declared before. The compiler will asslume that it returns int and that all parameters are ints. In effect, you are calling these functions, but the compiler will convert the arguments to int.
You would have noticed that if you had set up your compiler to produce some warnings, like -Wall on gcc:
a.c: In function ‘main’:
a.c:74: warning: implicit declaration of function ‘glGenVertexArrays’
a.c:75: warning: implicit declaration of function ‘glBindVertexArray’
[...]
However, the code compiles and links, and I can reproduce the results you described (I'm using Linux/NVIDIA here).
To fix this, you should use an OpenGL loader library. For example, I got your code working by using GLEW. All I had to do was add the following at the very top of the file
#define GLEW_NO_GLU // because you re-implemented some glu-like functions with a different interface
#include <glew.h>
and calling
glewExperimental=GL_TRUE;
if (glewInit() != GLEW_OK) {
fprintf (stderr, "ERROR: failed to initialize GLEW\n");
glfwTerminate();
return 1;
}
glGetError(); // read away error generated by GLEW, it is broken in core profiles...
The GLEW headers include declarations for all the functions, so no implicit type conversions occur anymore. GLEW might not be the best choice for core profiles, but I used it because it's the loader I'm most familiar with.

Meaning of "index" parameter in glEnableVertexAttribArray and (possibly) a bug in the OS X OpenGL implementation

1) Do I understand correctly that to draw using vertex arrays or VBOs I need, for all my attributes, to either call glBindAttribLocation before linking the shader program or call glGetAttribLocation after the program was successfully linked, and then use the bound/obtained index in the glVertexAttribPointer and glEnableVertexAttribArray calls?
To be more specific: these three functions - glGetAttribLocation, glVertexAttribPointer and glEnableVertexAttribArray - they all have an input parameter named "index". Is it the same "index" for all the three? And is it the same thing as the one returned by glGetAttribLocation?
If yes:
2) I've been facing a problem on OS X, I described it here: https://stackoverflow.com/questions/28093919/using-default-attribute-location-doesnt-work-on-osx-osx-opengl-bug , but unfortunately didn't get any replies.
The problem is that depending on what attribute locations I bind to my attributes I do or do not see anything on the screen. I only see this behavior on my MacBook Pro with OS X 10.9.5; I've tried running the same code on Linux and Windows and it seems to work on those platforms independently from which locations are my attributes bound to.
Here is a code example (which is supposed to draw a red triangle on the screen) that exhibits the problem:
#include <iostream>
#include <GLFW/glfw3.h>
GLuint global_program_object;
GLint global_position_location;
GLint global_aspect_ratio_location;
GLuint global_buffer_names[1];
int LoadShader(GLenum type, const char *shader_source)
{
GLuint shader;
GLint compiled;
shader = glCreateShader(type);
if (shader == 0)
return 0;
glShaderSource(shader, 1, &shader_source, NULL);
glCompileShader(shader);
glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
if (!compiled)
{
GLint info_len = 0;
glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &info_len);
if (info_len > 1)
{
char* info_log = new char[info_len];
glGetShaderInfoLog(shader, info_len, NULL, info_log);
std::cout << "Error compiling shader" << info_log << std::endl;
delete[] info_log;
}
glDeleteShader(shader);
return 0;
}
return shader;
}
int InitGL()
{
char vertex_shader_source[] =
"attribute vec4 att_position; \n"
"attribute float dummy;\n"
"uniform float uni_aspect_ratio; \n"
"void main() \n"
" { \n"
" vec4 test = att_position * dummy;\n"
" mat4 mat_projection = \n"
" mat4(1.0 / uni_aspect_ratio, 0.0, 0.0, 0.0, \n"
" 0.0, 1.0, 0.0, 0.0, \n"
" 0.0, 0.0, -1.0, 0.0, \n"
" 0.0, 0.0, 0.0, 1.0); \n"
" gl_Position = att_position; \n"
" gl_Position *= mat_projection; \n"
" } \n";
char fragment_shader_source[] =
"void main() \n"
" { \n"
" gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); \n"
" } \n";
GLuint vertex_shader;
GLuint fragment_shader;
GLuint program_object;
GLint linked;
vertex_shader = LoadShader(GL_VERTEX_SHADER , vertex_shader_source );
fragment_shader = LoadShader(GL_FRAGMENT_SHADER, fragment_shader_source);
program_object = glCreateProgram();
if(program_object == 0)
return 1;
glAttachShader(program_object, vertex_shader );
glAttachShader(program_object, fragment_shader);
// Here any index except 0 results in observing the black screen
glBindAttribLocation(program_object, 1, "att_position");
glLinkProgram(program_object);
glGetProgramiv(program_object, GL_LINK_STATUS, &linked);
if(!linked)
{
GLint info_len = 0;
glGetProgramiv(program_object, GL_INFO_LOG_LENGTH, &info_len);
if(info_len > 1)
{
char* info_log = new char[info_len];
glGetProgramInfoLog(program_object, info_len, NULL, info_log);
std::cout << "Error linking program" << info_log << std::endl;
delete[] info_log;
}
glDeleteProgram(program_object);
return 1;
}
global_program_object = program_object;
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glUseProgram(global_program_object);
global_position_location = glGetAttribLocation (global_program_object, "att_position");
global_aspect_ratio_location = glGetUniformLocation(global_program_object, "uni_aspect_ratio");
GLfloat vertices[] = {-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.0f, 0.5f, 0.0f};
glGenBuffers(1, global_buffer_names);
glBindBuffer(GL_ARRAY_BUFFER, global_buffer_names[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 9, vertices, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
return 0;
}
void Render()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
glUseProgram(global_program_object);
glBindBuffer(GL_ARRAY_BUFFER, global_buffer_names[0]);
glVertexAttribPointer(global_position_location, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(global_position_location);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableVertexAttribArray(global_position_location);
glUseProgram(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
void FreeGL()
{
glDeleteBuffers(1, global_buffer_names);
glDeleteProgram(global_program_object);
}
void SetViewport(int width, int height)
{
glViewport(0, 0, width, height);
glUseProgram(global_program_object);
glUniform1f(global_aspect_ratio_location, static_cast<GLfloat>(width) / static_cast<GLfloat>(height));
}
int main(void)
{
GLFWwindow* window;
if (!glfwInit())
return -1;
window = glfwCreateWindow(640, 480, "Hello World", NULL, NULL);
if (!window)
{
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window);
InitGL();
// Double the resolution to correctly draw with Retina display
SetViewport(1280, 960);
while (!glfwWindowShouldClose(window))
{
Render();
glfwSwapBuffers(window);
glfwPollEvents();
}
FreeGL();
glfwTerminate();
return 0;
}
Does this look like a bug to you? Can anyone reproduce it? If it's a bug where should I report it?
P.S.
I've also tried SDL instead of GLFW, the behavior is the same...
The behavior you see is actually correct as per the spec, and Mac OS X has something to do with this, but only in a very indirect way.
To answer question 1) first: you are basically correct. With modern GLSL (>=3.30), you can also specify the desired index via the layout(location=...) qualifier directly in the shader code, e.g. layout(location = 1) in vec4 att_position;, instead of using glBindAttribLocation(), but that is only a side note.
The problem you are facing is that you are using a legacy GL context. You do not specify a desired version, so you will get maximum compatibility with the old ways. On Windows, you are very likely to get a compatibility profile of the highest version supported by the implementation (typically GL 3.x or GL 4.x on non-ancient GPUs).
However, on OS X, you are limited to at most GL 2.1. And this is where the crux lies: your code is invalid in GL 2.x. To explain this, I have to go back in GL history. In the beginning, there was immediate mode, so you drew by
glBegin(primType);
glColor3f(r,g,b);
glVertex3f(x,y,z);
[...]
glColor3f(r,g,b);
glVertex3f(x,y,z);
glEnd();
Note that the glVertex call is what actually creates a vertex. All other per-vertex attributes are basically some current vertex state which can be set any time, but calling glVertex will take all of those current attributes together with the position to form the vertex which is fed to the pipeline.
Now when vertex arrays were added, we got functions like glVertexPointer(), glColorPointer() and so on, and each attribute array could be enabled or disabled separately via glEnableClientState(). The array-based draw calls are actually defined in terms of the immediate mode in the OpenGL 2.1 specification as glDrawArrays(GLenum mode, GLint first, GLsizei count) being equivalent to
glBegin(mode);
for (i=0; i<count; i++)
ArrayElement(first + i);
glEnd();
with ArrayElement(i) being defined as follows (derived from the wording of the GL 1.5 spec):
if ( normal_array_enabled )
Normal3...( <i-th normal value> );
[...] // similar for all other builtin attribs
if ( vertex_array_enabled)
Vertex...( <i-th vertex value> );
This definition has a subtle consequence: you must have the GL_VERTEX_ARRAY attribute array enabled, otherwise nothing will be drawn, since no equivalent of a glVertex call is generated.
Now when the generic attributes were added in GL 2.0, a special guarantee was made: generic attribute 0 aliases the builtin glVertex attribute - so both can be used interchangeably, in immediate mode as well as in arrays. So glVertexAttrib3f(0,x,y,z) "creates" a vertex the same way glVertex3f(x,y,z) would have. And using an array with glEnableVertexAttribArray(0) is as good as glEnableClientState(GL_VERTEX_ARRAY).
In GL 2.1, the ArrayElement(i) function now looks as follows:
if ( normal_array_enabled )
Normal3...( <i-th normal value> );
[...] // similar for all other builtin attribs
for (a=1; a<max_attribs; a++) {
if ( generic_attrib_a_enabled )
glVertexAttrib...(a, <i-th value of attrib a> );
}
if ( generic_attrib_0_enabled)
glVertexAttrib...(0, <i-th value of attrib 0> );
else if ( vertex_array_enabled)
Vertex...( <i-th vertex value> );
Now this is what happens to you. You absolutely need attribute 0 (or the old GL_VERTEX_ARRAY attribute) to be enabled for this to generate any vertices for the pipeline.
Note that in theory it should be possible to just enable attribute 0, no matter whether it is used in the shader or not. To be 100% safe, you should just make sure that the corresponding attrib pointer points to valid memory. So you could simply check whether your attribute index 0 is used, and if not, set the same pointer for attribute 0 as for your real attribute, and the GL should be happy. But I haven't tried this.
In more modern GL, these requirements are gone, and drawing without attribute 0 will work as intended - that is what you saw on the other systems. Maybe you should consider switching to modern GL, say a >= 3.2 core profile, where this issue is not present (but you will need to update your code a bit, including the shaders).

OpenGL not drawing on Mavericks with GLFW and GLLoadGen

I'm attempting to setup a cross-platform codebase for OpenGL work, and the following code draws just fine on the Windows 7 partition of my hard drive. However, on Mavericks I only get a black screen and can't figure out why. I've tried all the things suggested in the guides and in related questions on here but nothing has worked so far! Hopefully I'm just missing something obvious, as I'm still quite new to OpenGL.
#include "stdafx.h"
#include "gl_core_4_3.hpp"
#include <GLFW/glfw3.h>
int main(int argc, char* argv[])
{
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, 1);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
if (!glfwInit())
{
fprintf(stderr, "ERROR");
glfwTerminate();
return 1;
}
GLFWwindow* window = glfwCreateWindow(640, 480, "First GLSL Triangle", nullptr, nullptr);
if (!window)
{
fprintf(stderr, "ERROR");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent(window);
gl::exts::LoadTest didLoad = gl::sys::LoadFunctions();
if (!didLoad)
{
fprintf(stderr, "ERROR");
glfwTerminate();
return 1;
}
printf("Number of functions that failed to load: %i\n", didLoad.GetNumMissing()); // This is returning 16 on Windows and 82 on Mavericks, however I have no idea how to fix that.
gl::Enable(gl::DEPTH_TEST);
gl::DepthFunc(gl::LESS);
float points[] =
{
0.0f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f,
};
GLuint vbo = 0;
gl::GenBuffers(1, &vbo);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::BufferData(gl::ARRAY_BUFFER, sizeof(points) * sizeof(points[0]), points, gl::STATIC_DRAW);
GLuint vao = 0;
gl::GenVertexArrays(1, &vao);
gl::BindVertexArray(vao);
gl::EnableVertexAttribArray(0);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::VertexAttribPointer(0, 3, gl::FLOAT, 0, 0, NULL);
const char* vertexShader =
"#version 400\n"
"in vec3 vp;"
"void main() {"
" gl_Position = vec4(vp, 1.0);"
"}";
const char* fragmentShader =
"#version 400\n"
"out vec4 frag_colour;"
"void main() {"
" frag_colour = vec4(1.0, 1, 1, 1.0);"
"}";
GLuint vs = gl::CreateShader(gl::VERTEX_SHADER);
gl::ShaderSource(vs, 1, &vertexShader, nullptr);
gl::CompileShader(vs);
GLuint fs = gl::CreateShader(gl::FRAGMENT_SHADER);
gl::ShaderSource(fs, 1, &fragmentShader, nullptr);
gl::CompileShader(fs);
GLuint shaderProgram = gl::CreateProgram();
gl::AttachShader(shaderProgram, fs);
gl::AttachShader(shaderProgram, vs);
gl::LinkProgram(shaderProgram);
while (!glfwWindowShouldClose(window))
{
gl::ClearColor(0, 0, 0, 0);
gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT);
gl::UseProgram(shaderProgram);
gl::BindVertexArray(vao);
gl::DrawArrays(gl::TRIANGLES, 0, 3);
glfwPollEvents();
glfwSwapBuffers(window);
}
glfwTerminate();
return 0;
}
Compiling through Xcode, using a 2013 Macbook Mini, Intel HD Graphics 5000. It's also probably worth noting that the GLLoadGen GetNumMissing() method is returning 82 missing functions on OSX, and I have no idea why that is or how to fix it. GLFW is including gl.h as opposed to gl3.h, but forcing it to include gl3.h by declaring the required macro outputs a warning about including both headers and still nothing draws. Any help or suggestions would be great.
You have to call glfwInit before you call any other GLFW function. Also register an error callback so that you get diagnostics about why a certain GLFW operation failed. You requested an OpenGL profile not supported by Mac OS X Mavericks. Calling glfwInit after setting the window hints resets that selection, which is why you get a window+context, but not the desired profile. Pulling glfwInit to the front solves that problem, but now your window+context creation fails due to lack of OS support: Mavericks tops out at OpenGL 4.1 core, so request 4.1 or lower instead of 4.3.
After every OpenGL call, check that there is no error (use glGetError or gl::GetError). For your shaders, you must check that they compiled properly - there may well be errors. So check that as well (glGetShaderiv with GL_COMPILE_STATUS and glGetShaderInfoLog). Do the same for the link stage.