GLEW OpenGL Access violation when using glGenVertexArrays - opengl

In my case, this problem was fixed by updating the graphics card driver.
I have searched and found people with the same problem on Stack Overflow and elsewhere on the internet, but their answers don't solve my problem.
I am using SDL2 and GLEW. When I run the application, I get an "Access violation" error when this function is executed:
glGenVertexArrays(1, &VertexArrayID);
My code:
bool Game::initSDL(char* title, int xpos, int ypos, int width, int height, int flags) {
    if (SDL_Init(SDL_INIT_EVERYTHING) >= 0) {
        Uint32 flags = SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL;
        SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
        SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 0);
        SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
        SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
        mainWindow = SDL_CreateWindow(title, xpos, ypos, width, height, flags);
        mainGLContext = SDL_GL_CreateContext(mainWindow);
        SDL_GL_SetSwapInterval(1);
        // Initialize GLEW
        glewExperimental = true; // Needed for core profile
        GLenum err = glewInit();
        if (GLEW_OK != err)
        {
            /* Problem: glewInit failed, something is seriously wrong. */
            fprintf(stderr, "Error: %s\n", glewGetErrorString(err));
        }
        // Dark blue background
        glClearColor(0.0f, 0.0f, 0.4f, 0.0f);
        GLuint VertexArrayID;
        glGenVertexArrays(1, &VertexArrayID);
        glBindVertexArray(VertexArrayID);
    } else {
        return false;
    }
    return true;
}

Try adding glewExperimental = GL_TRUE; before glewInit():
glewExperimental = GL_TRUE;
glewInit();
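For context, a slightly fuller initialization block might look like the sketch below. The availability check is an addition, not part of the answer above: if the driver or context does not expose vertex array objects, GLEW leaves the glGenVertexArrays pointer NULL, and calling it produces exactly this kind of access violation.
glewExperimental = GL_TRUE;            // must be set before glewInit()
GLenum err = glewInit();
if (err != GLEW_OK) {
    fprintf(stderr, "Error: %s\n", glewGetErrorString(err));
    return false;
}
// Hypothetical guard: bail out if VAOs are not available at all.
if (!GLEW_VERSION_3_0 && !GLEW_ARB_vertex_array_object) {
    fprintf(stderr, "Vertex array objects are not supported by this context\n");
    return false;
}
GLuint VertexArrayID;
glGenVertexArrays(1, &VertexArrayID);
glBindVertexArray(VertexArrayID);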

Related

OpenGL and SDL2 glGenBuffers segmentation fault

I have a problem with OpenGL and SDL2.
When glGenBuffers is called, the application crashes.
Does anyone know where I made a mistake?
#define GLEW_STATIC
#include <iostream>
#include <SDL2/SDL.h>
#include <SDL2/SDL_image.h>
#include <GL/glew.h>
using namespace std;
SDL_Window*win;
SDL_Event event;
const Uint8*keystate = SDL_GetKeyboardState(NULL);
SDL_GLContext context;
int main(int argc, char* args[])
{
    SDL_Init(SDL_INIT_EVERYTHING);
    glewExperimental = GL_TRUE;
    glewInit();
    win = SDL_CreateWindow("opengl", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 900, 800, SDL_WINDOW_OPENGL);
    context = SDL_GL_CreateContext(win);
    SDL_GL_MakeCurrent(win, context);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
    glClearColor(1.0, 1.0, 1.0, 1.0);
    float positions[6] =
    {
        -0.5f, -0.5f,
         0.0f,  0.5f,
         0.5f,  0.0f
    };
    GLuint buffer;
    glGenBuffers(1, &buffer); //here
    glBindBuffer(GL_ARRAY_BUFFER, buffer);
    glBufferData(GL_ARRAY_BUFFER, 6, positions, GL_STATIC_DRAW);
    while (true)
    {
        while (SDL_PollEvent(&event))
        {
            if (event.type == SDL_QUIT) return 0;
        }
        glClear(GL_COLOR_BUFFER_BIT);
        glDrawArrays(GL_TRIANGLES, 0, 3);
        SDL_GL_SwapWindow(win);
    }
}
I really don't know where the problem is; I would be grateful if someone could help me.
The GLEW library has to be initialized by glewInit after the OpenGL context has been made current with SDL_GL_MakeCurrent(win, context). See Initializing GLEW.
win = SDL_CreateWindow("opengl",SDL_WINDOWPOS_CENTERED,SDL_WINDOWPOS_CENTERED,
900,800,SDL_WINDOW_OPENGL);
context = SDL_GL_CreateContext(win);
SDL_GL_MakeCurrent(win,context);
if (glewInit() != GLEW_OK)
return 0;
If you call glewInit before that, GLEW initialization fails and it doesn't return GLEW_OK.
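Applied to the code above, the start of main would look roughly like this (a sketch; as an extra change beyond the answer, the SDL_GL_SetAttribute calls are moved before SDL_CreateWindow, since the attributes only take effect when set before the window and context are created):
SDL_Init(SDL_INIT_EVERYTHING);
// Request the context before creating the window.
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
win = SDL_CreateWindow("opengl", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                       900, 800, SDL_WINDOW_OPENGL);
context = SDL_GL_CreateContext(win);
SDL_GL_MakeCurrent(win, context);
// Only now is there a current context whose function pointers GLEW can load.
glewExperimental = GL_TRUE;
if (glewInit() != GLEW_OK)
    return 0;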

Getting Exception thrown for glViewport(0, 0, framebufferWidth, framebufferHight);

#include "list.h"
int main()
{
    //INIT GLFW
    glfwInit();
    //CREATE WINDOW
    const int WINDOW_WIDTH = 640;
    const int WINDOW_HEIGHT = 480;
    int framebufferWidth = 0;
    int framebufferHight = 0;
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 4);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
    GLFWwindow* window = glfwCreateWindow(WINDOW_WIDTH, WINDOW_HEIGHT, "Title", NULL, NULL);
    glfwGetFramebufferSize(window, &framebufferWidth, &framebufferHight);
    glViewport(0, 0, framebufferWidth, framebufferHight);
    glfwMakeContextCurrent(window);//IMPORTIANT!!
    //INIT GLEW (NEEDS WINDOW AND OPENGL CONTEXT)
    glewExperimental = GL_TRUE;
    //Error
    if (glewInit() != GLEW_OK)
    {
        std::cout << "ERROR::MAIN.CPP::GLEW_INIT_FAILED" << "\n";
        glfwTerminate();
    }
    //MAIN LOOP
    while (glfwWindowShouldClose(window))
    {
        //UPDATE INPUT ---
        //UPDATE ---
        //DRAW ---
        //Clear
        //Draw
        //End Draw
    }
    //END OF PROGAM
    glfwTerminate();
    return 0;
}
glViewport(0, 0, framebufferWidth, framebufferHight); is giving me
Unhandled exception at 0x00007FF704D6E7D9 in OpenzGL4.exe: 0xC0000005: Access violation reading location 0x0000000000000348.
when I run it.
Any OpenGL instruction requires a valid and current OpenGL context. Hence glfwMakeContextCurrent has to be invoked before any OpenGL instruction:
GLFWwindow* window = glfwCreateWindow(WINDOW_WIDTH,WINDOW_HEIGHT,"Title", NULL, NULL);
glfwMakeContextCurrent(window); // <----- ADD
glfwGetFramebufferSize(window, &framebufferWidth, &framebufferHight);
glViewport(0, 0, framebufferWidth, framebufferHight);
glfwMakeContextCurrent(window); // <----- DELETE
In addition to what Rabbid76 already wrote in his answer, there is another problem in your code:
glViewport(0, 0, framebufferWidth, framebufferHight);
glfwMakeContextCurrent(window);//IMPORTIANT!!
//INIT GLEW (NEEDS WINDOW AND OPENGL CONTEXT)
glewExperimental = GL_TRUE;
//Error
if (glewInit() != GLEW_OK)
{
    std::cout << "ERROR::MAIN.CPP::GLEW_INIT_FAILED" << "\n";
    glfwTerminate();
}
Since you use the GLEW OpenGL loader, every gl...() function name is actually remapped as a preprocessor macro to a function pointer, and glewInit queries all of those function pointers (which needs an active OpenGL context, so it can't happen before glfwMakeContextCurrent). So it is not enough to move glViewport after glfwMakeContextCurrent; you must also move it after glewInit.
And there is a second issue with this code: glewExperimental = GL_TRUE is an evil hack for a bug in GLEW 1.x with OpenGL core profiles, and its use can't be discouraged enough. Just update to GLEW 2.x or another loader which is compatible with OpenGL core profile contexts.
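Putting both answers together, the beginning of main would look roughly like this (a sketch; with GLEW 2.x the glewExperimental line can simply be dropped):
glfwInit();
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 4);
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
GLFWwindow* window = glfwCreateWindow(WINDOW_WIDTH, WINDOW_HEIGHT, "Title", NULL, NULL);
glfwMakeContextCurrent(window);        // 1. make the context current
if (glewInit() != GLEW_OK)             // 2. then load the GL function pointers
{
    std::cout << "ERROR::MAIN.CPP::GLEW_INIT_FAILED" << "\n";
    glfwTerminate();
    return -1;
}
glfwGetFramebufferSize(window, &framebufferWidth, &framebufferHight);
glViewport(0, 0, framebufferWidth, framebufferHight);   // 3. only now call into OpenGL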

GLFW - many errors in this code

So my university lecturer gave us this code and it doesn't work. It never has, and no one has been able to get it to work so far. Are we being stupid, or is our lecturer giving us broken material? I seriously can't figure this out and need help. I managed to get part way through fixing many mistakes, but after that the issues got harder and harder to solve, despite this supposedly being '100% working' code. Side note: all the directories are formatted correctly and the additional dependencies have all been set up correctly, to the best of my knowledge.
//First Shader Handling Program
#include "stdafx.h"
#include "gl_core_4_3.hpp"
#include <GLFW/glfw3.h>
int _tmain(int argc, _TCHAR* argv[])
{
//Select the 4.3 core profile
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
//Start the OpenGL context and open a window using the //GLFW helper library
if (!glfwInit()) {
fprintf(stderr, "ERROR: could not start GLFW3\n");
glfwTerminate();
return 1;
}
GLFWwindow* window = glfwCreateWindow(640, 480, "First GLSL Triangle", NULL, NULL);
if (!window) {
fprintf(stderr, "ERROR: could not open window with GLFW3\n");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent(window);
//Load the OpenGL functions for C++ gl::exts::LoadTest didLoad = gl::sys::LoadFunctions(); if (!didLoad) {
//Load failed
fprintf(stderr, "ERROR: GLLoadGen failed to load functions\n");
glfwTerminate();
return 1;
}
printf("Number of functions that failed to load : %i.\n", didLoad.GetNumMissing());
//Tell OpenGL to only draw a pixel if its shape is closer to //the viewer
//i.e. Enable depth testing with smaller depth value //interpreted as being closer gl::Enable(gl::DEPTH_TEST); gl::DepthFunc(gl::LESS);
//Set up the vertices for a triangle
float points[] = {
0.0f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f
};
//Create a vertex buffer object to hold this data GLuint vbo=0;
gl::GenBuffers(1, &vbo);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::BufferData(gl::ARRAY_BUFFER, 9 * sizeof(float), points,
gl::STATIC_DRAW);
//Create a vertex array object
GLuint vao = 0;
gl::GenVertexArrays(1, &vao);
gl::BindVertexArray(vao);
gl::EnableVertexAttribArray(0);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::VertexAttribPointer(0, 3, gl::FLOAT, FALSE, 0, NULL);
//The shader code strings which later we will put in //separate files
//The Vertex Shader
const char* vertex_shader =
"#version 400\n"
"in vec3 vp;"
"void main() {"
" gl_Position = vec4(vp, 1.0);"
"}";
//The Fragment Shader
const char* fragment_shader =
"#version 400\n"
"out vec4 frag_colour;"
"void main() {"
" frag_colour = vec4(1.0, 0.5, 0.0, 1.0);"
"}";
//Load the strings into shader objects and compile GLuint vs = gl::CreateShader(gl::VERTEX_SHADER); gl::ShaderSource(vs, 1, &vertex_shader, NULL); gl::CompileShader(vs);
GLuint fs = gl::CreateShader(gl::FRAGMENT_SHADER); gl::ShaderSource(fs, 1, &fragment_shader, NULL); gl::CompileShader(fs);
//Compiled shaders must be compiled into a single executable //GPU shader program
//Create empty program and attach shaders GLuint shader_program = gl::CreateProgram(); gl::AttachShader(shader_program, fs); gl::AttachShader(shader_program, vs); gl::LinkProgram(shader_program);
//Now draw
while (!glfwWindowShouldClose(window)) {
//Clear the drawing surface
gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT);
gl::UseProgram(shader_program);
gl::BindVertexArray(vao);
//Draw point 0 to 3 from the currently bound VAO with
//current in-use shader
gl::DrawArrays(gl::TRIANGLES, 0, 3);
//update GLFW event handling
glfwPollEvents();
//Put the stuff we have been drawing onto the display glfwSwapBuffers(window);
}
//Close GLFW and end
glfwTerminate();
return 0;
}
Your line endings seem to have been mangled.
There are multiple lines in your code where the actual code was not broken onto a new line, so that code now sits on the same line as a comment and is therefore never executed. This is your program with proper line endings:
//First Shader Handling Program
#include "stdafx.h"
#include "gl_core_4_3.hpp"
#include <GLFW/glfw3.h>
int _tmain(int argc, _TCHAR* argv[])
{
//Select the 4.3 core profile
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
//Start the OpenGL context and open a window using the
//GLFW helper library
if (!glfwInit()) {
fprintf(stderr, "ERROR: could not start GLFW3\n");
glfwTerminate();
return 1;
}
GLFWwindow* window = glfwCreateWindow(640, 480, "First GLSL Triangle", NULL, NULL);
if (!window) {
fprintf(stderr, "ERROR: could not open window with GLFW3\n");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent(window);
//Load the OpenGL functions for C++
gl::exts::LoadTest didLoad = gl::sys::LoadFunctions();
if (!didLoad) {
//Load failed
fprintf(stderr, "ERROR: GLLoadGen failed to load functions\n");
glfwTerminate();
return 1;
}
printf("Number of functions that failed to load : %i.\n", didLoad.GetNumMissing());
//Tell OpenGL to only draw a pixel if its shape is closer to
//the viewer
//i.e. Enable depth testing with smaller depth value
//interpreted as being closer
gl::Enable(gl::DEPTH_TEST);
gl::DepthFunc(gl::LESS);
//Set up the vertices for a triangle
float points[] = {
0.0f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f
};
//Create a vertex buffer object to hold this data
GLuint vbo=0;
gl::GenBuffers(1, &vbo);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::BufferData(gl::ARRAY_BUFFER, 9 * sizeof(float), points, gl::STATIC_DRAW);
//Create a vertex array object
GLuint vao = 0;
gl::GenVertexArrays(1, &vao);
gl::BindVertexArray(vao);
gl::EnableVertexAttribArray(0);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::VertexAttribPointer(0, 3, gl::FLOAT, FALSE, 0, NULL);
//The shader code strings which later we will put in
//separate files
//The Vertex Shader
const char* vertex_shader =
"#version 400\n"
"in vec3 vp;"
"void main() {"
" gl_Position = vec4(vp, 1.0);"
"}";
//The Fragment Shader
const char* fragment_shader =
"#version 400\n"
"out vec4 frag_colour;"
"void main() {"
" frag_colour = vec4(1.0, 0.5, 0.0, 1.0);"
"}";
//Load the strings into shader objects and compile
GLuint vs = gl::CreateShader(gl::VERTEX_SHADER);
gl::ShaderSource(vs, 1, &vertex_shader, NULL);
gl::CompileShader(vs);
GLuint fs = gl::CreateShader(gl::FRAGMENT_SHADER);
gl::ShaderSource(fs, 1, &fragment_shader, NULL);
gl::CompileShader(fs);
//Compiled shaders must be compiled into a single executable
//GPU shader program
//Create empty program and attach shaders
GLuint shader_program = gl::CreateProgram();
gl::AttachShader(shader_program, fs);
gl::AttachShader(shader_program, vs);
gl::LinkProgram(shader_program);
//Now draw
while (!glfwWindowShouldClose(window)) {
//Clear the drawing surface
gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT);
gl::UseProgram(shader_program);
gl::BindVertexArray(vao);
//Draw point 0 to 3 from the currently bound VAO with
//current in-use shader
gl::DrawArrays(gl::TRIANGLES, 0, 3);
//update GLFW event handling
glfwPollEvents();
//Put the stuff we have been drawing onto the display
glfwSwapBuffers(window);
}
//Close GLFW and end
glfwTerminate();
return 0;
}

OpenGL glCreateShader makes the window white and gives an error

When I try to create a shader with glCreateShader and run my program, the window I just made goes white and I get this error:
Exception thrown at 0x03D312F0 (atioglxx.dll) in OpenGl.exe: 0xC0000005: Access violation reading location 0x73553A43.
Does anyone know why this is happening?
Here's my code
#include <GL/glew.h>
#include <GLFW\glfw3.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main()
{
float speed = 0.005;
float edge = 3.0;
float move = 0.0;
float zamik = 0.1;
GLFWwindow *window;
// initialize GLFW
if (!glfwInit())
{
return -1;
}
// create a window mode and its OpenGl context
window = glfwCreateWindow(640, 480, "Window", NULL, NULL);
if (!window)
{
glfwTerminate();
return -1;
}
//make the windows contex current
glfwMakeContextCurrent(window);
glewInit();
GLenum err = glewInit();
if (GLEW_OK != err)
{
/* Problem: glewInit failed, something is seriously wrong. */
fprintf(stderr, "Error: %s\n", glewGetErrorString(err));
}
GLfloat verts[] =
{
0.0f, 0.5f, 0.0f,
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f
};
/*GLfloat color[] =
{
255, 0, 0,
100, 255, 0,
0, 0, 255,
255, 255, 255
};*/
GLuint vertexShader;
vertexShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertexShader, 1, "C:\Users\Ghost.corp\Documents\Visual Studio 2015\Projects\OpenGl\OpenGl\VertexShader.shdVertx", NULL);
//glCompileShader(vertexShader);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_PROFILE, 0);
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
glfwWindowHint(GLFW_SAMPLES, 4);
//loop
while (!glfwWindowShouldClose(window))
{
//clears our screen
glClear(GL_COLOR_BUFFER_BIT);
//render opengl content
glEnableClientState(GL_VERTEX_ARRAY);
//glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, verts);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 3);
//glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
//sweap front and back buffers
glfwSwapBuffers(window);
glfwPollEvents();
//glColorPointer(4, GL_FLOAT, 0, color);
}
glfwTerminate();
return -1;
}
Concentrate on the glCreateShader call, in case I'm passing the parameters wrong. Otherwise I don't know.
The glCreateShader call looks fine. But for the next line, you should read the shader file first: glShaderSource expects the shader source code, not a path. An example would be:
char *vs_source =
"attribute vec3 pos;\n"
"void main() {\n"
" gl_Position = vec4(pos,1.0);\n"
"}\n";
glShaderSource(vertexShader, 1, (const GLchar**)&vs_source, NULL);
So, first read the file and put its contents into a string, and then pass that string to this function.
Loading a file in C is a bit involved, so you need a function like this one:
int load_file(const char *filename, char **result)
{
    int size = 0;
    FILE *f = fopen(filename, "rb");
    if (f == NULL)
    {
        *result = NULL;
        return -1; // -1 means file opening fail
    }
    fseek(f, 0, SEEK_END);
    size = ftell(f);
    fseek(f, 0, SEEK_SET);
    *result = (char *)malloc(size + 1);
    if (size != fread(*result, sizeof(char), size, f))
    {
        fclose(f);        // don't leak the file handle on a short read
        free(*result);
        *result = NULL;
        return -2; // -2 means file reading fail
    }
    fclose(f);
    (*result)[size] = 0;
    return size;
}
And then, you can use it in this way:
char *vs_source;
int size;
size = load_file("C:\\Users\\Ghost.corp\\Documents\\Visual Studio 2015\\Projects\\OpenGl\\OpenGl\\VertexShader.shdVertx", &vs_source);
glShaderSource(vertexShader, 1, (const GLchar**)&vs_source, NULL);
You may check the returned size to make sure there was no problem loading the file. Also, as you are using C, keep in mind that freeing this newly allocated memory is on you; if you don't free it, you'll have a memory leak.
Resources:
The file-loading function is taken from here.
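Since the question is built as C++, a shorter alternative (a sketch, not part of the original answer; loadTextFile is just a hypothetical helper name) is to let std::ifstream and std::string own the buffer, which also removes the need to free anything by hand:
#include <fstream>
#include <sstream>
#include <string>
// Read a whole text file into a string; returns an empty string on failure.
std::string loadTextFile(const std::string& filename)
{
    std::ifstream in(filename, std::ios::binary);
    if (!in)
        return std::string();
    std::ostringstream contents;
    contents << in.rdbuf();   // stream the entire file into the buffer
    return contents.str();
}
// Usage:
std::string vsSource = loadTextFile("VertexShader.shdVertx");
const GLchar* src = vsSource.c_str();
glShaderSource(vertexShader, 1, &src, NULL);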

OpenGL not drawing on Mavericks with GLFW and GLLoadGen

I'm attempting to set up a cross-platform codebase for OpenGL work, and the following code draws just fine on the Windows 7 partition of my hard drive. However, on Mavericks I only get a black screen and can't figure out why. I've tried all the things suggested in the guides and in related questions on here, but nothing has worked so far! Hopefully I'm just missing something obvious, as I'm still quite new to OpenGL.
#include "stdafx.h"
#include "gl_core_4_3.hpp"
#include <GLFW/glfw3.h>
int main(int argc, char* argv[])
{
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, 1);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
if (!glfwInit())
{
fprintf(stderr, "ERROR");
glfwTerminate();
return 1;
}
GLFWwindow* window = glfwCreateWindow(640, 480, "First GLSL Triangle", nullptr, nullptr);
if (!window)
{
fprintf(stderr, "ERROR");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent(window);
gl::exts::LoadTest didLoad = gl::sys::LoadFunctions();
if (!didLoad)
{
fprintf(stderr, "ERROR");
glfwTerminate();
return 1;
}
printf("Number of functions that failed to load: %i\n", didLoad.GetNumMissing()); // This is returning 16 on Windows and 82 on Mavericks, however i have no idea how to fix that.
gl::Enable(gl::DEPTH_TEST);
gl::DepthFunc(gl::LESS);
float points[] =
{
0.0f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f,
};
GLuint vbo = 0;
gl::GenBuffers(1, &vbo);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::BufferData(gl::ARRAY_BUFFER, sizeof(points) * sizeof(points[0]), points, gl::STATIC_DRAW);
GLuint vao = 0;
gl::GenVertexArrays(1, &vao);
gl::BindVertexArray(vao);
gl::EnableVertexAttribArray(0);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::VertexAttribPointer(0, 3, gl::FLOAT, 0, 0, NULL);
const char* vertexShader =
"#version 400\n"
"in vec3 vp;"
"void main() {"
" gl_Position = vec4(vp, 1.0);"
"}";
const char* fragmentShader =
"#version 400\n"
"out vec4 frag_colour;"
"void main() {"
" frag_colour = vec4(1.0, 1, 1, 1.0);"
"}";
GLuint vs = gl::CreateShader(gl::VERTEX_SHADER);
gl::ShaderSource(vs, 1, &vertexShader, nullptr);
gl::CompileShader(vs);
GLuint fs = gl::CreateShader(gl::FRAGMENT_SHADER);
gl::ShaderSource(fs, 1, &fragmentShader, nullptr);
gl::CompileShader(fs);
GLuint shaderProgram = gl::CreateProgram();
gl::AttachShader(shaderProgram, fs);
gl::AttachShader(shaderProgram, vs);
gl::LinkProgram(shaderProgram);
while (!glfwWindowShouldClose(window))
{
gl::ClearColor(0, 0, 0, 0);
gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT);
gl::UseProgram(shaderProgram);
gl::BindVertexArray(vao);
gl::DrawArrays(gl::TRIANGLES, 0, 3);
glfwPollEvents();
glfwSwapBuffers(window);
}
glfwTerminate();
return 0;
}
Compiling through Xcode, using a 2013 Macbook Mini, Intel HD Graphics 5000. It's also probably worth noting that the GLLoadGen GetNumMissing() method is returning 82 missing functions on OSX, and I have no idea why that is or how to fix it. GLFW is including gl.h as opposed to gl3.h, but forcing it to include gl3.h by declaring the required macro outputs a warning about including both headers and still nothing draws. Any help or suggestions would be great.
You have to call glfwInit before you call any other GLFW function. Also register an error callback so that you get diagnostics about why a certain GLFW operation failed. You requested an OpenGL profile that is not supported by MacOS X Mavericks, but calling glfwInit after setting the window hints resets that selection, which is why you get a window+context, just not with the desired profile. Pulling glfwInit to the front solves that problem, but now your window+context creation fails due to lack of OS support.
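An ordering that can work on Mavericks might look like the sketch below (assumptions: the error callback name is made up for the example, and the requested version is lowered to 4.1, the highest OpenGL version Mavericks exposes):
static void errorCallback(int error, const char* description)
{
    fprintf(stderr, "GLFW error %d: %s\n", error, description);
}
// in main():
glfwSetErrorCallback(errorCallback);   // may be registered before glfwInit
if (!glfwInit())
{
    fprintf(stderr, "ERROR");
    return 1;
}
// Hints set before glfwInit are discarded, so set them here.
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, 1);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);
GLFWwindow* window = glfwCreateWindow(640, 480, "First GLSL Triangle", nullptr, nullptr);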
After every OpenGL call, check that there is no error (use glGetError or gl::GetError). With your shaders, you must check that they have compiled properly; there may well be errors, so check that as well (glGetShaderiv and glGetShaderInfoLog). Do the same for the link stage.
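A sketch of such a check for the vertex shader, using the same gl:: naming pattern as the rest of the question's code (with GLEW the equivalent calls are glGetShaderiv and glGetShaderInfoLog):
// needs #include <vector>
GLint status = 0;
gl::GetShaderiv(vs, gl::COMPILE_STATUS, &status);
if (!status)
{
    GLint logLength = 0;
    gl::GetShaderiv(vs, gl::INFO_LOG_LENGTH, &logLength);
    std::vector<char> log(logLength > 1 ? logLength : 1);
    gl::GetShaderInfoLog(vs, (GLsizei)log.size(), nullptr, log.data());
    fprintf(stderr, "Vertex shader failed to compile:\n%s\n", log.data());
}
// Repeat for fs, and check gl::LINK_STATUS with gl::GetProgramiv /
// gl::GetProgramInfoLog after gl::LinkProgram.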