OpenGL not drawing on Mavericks with GLFW and GLLoadGen - c++

I'm attempting to set up a cross-platform codebase for OpenGL work, and the following code draws just fine on the Windows 7 partition of my hard drive. However, on Mavericks I only get a black screen and can't figure out why. I've tried everything suggested in the guides and in related questions here, but nothing has worked so far. Hopefully I'm just missing something obvious, as I'm still quite new to OpenGL.
#include "stdafx.h"
#include "gl_core_4_3.hpp"
#include <GLFW/glfw3.h>
int main(int argc, char* argv[])
{
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, 1);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
if (!glfwInit())
{
fprintf(stderr, "ERROR");
glfwTerminate();
return 1;
}
GLFWwindow* window = glfwCreateWindow(640, 480, "First GLSL Triangle", nullptr, nullptr);
if (!window)
{
fprintf(stderr, "ERROR");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent(window);
gl::exts::LoadTest didLoad = gl::sys::LoadFunctions();
if (!didLoad)
{
fprintf(stderr, "ERROR");
glfwTerminate();
return 1;
}
printf("Number of functions that failed to load: %i\n", didLoad.GetNumMissing()); // This is returning 16 on Windows and 82 on Mavericks, however i have no idea how to fix that.
gl::Enable(gl::DEPTH_TEST);
gl::DepthFunc(gl::LESS);
float points[] =
{
0.0f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f,
};
GLuint vbo = 0;
gl::GenBuffers(1, &vbo);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::BufferData(gl::ARRAY_BUFFER, sizeof(points), points, gl::STATIC_DRAW);
GLuint vao = 0;
gl::GenVertexArrays(1, &vao);
gl::BindVertexArray(vao);
gl::EnableVertexAttribArray(0);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::VertexAttribPointer(0, 3, gl::FLOAT, 0, 0, NULL);
const char* vertexShader =
"#version 400\n"
"in vec3 vp;"
"void main() {"
" gl_Position = vec4(vp, 1.0);"
"}";
const char* fragmentShader =
"#version 400\n"
"out vec4 frag_colour;"
"void main() {"
" frag_colour = vec4(1.0, 1, 1, 1.0);"
"}";
GLuint vs = gl::CreateShader(gl::VERTEX_SHADER);
gl::ShaderSource(vs, 1, &vertexShader, nullptr);
gl::CompileShader(vs);
GLuint fs = gl::CreateShader(gl::FRAGMENT_SHADER);
gl::ShaderSource(fs, 1, &fragmentShader, nullptr);
gl::CompileShader(fs);
GLuint shaderProgram = gl::CreateProgram();
gl::AttachShader(shaderProgram, fs);
gl::AttachShader(shaderProgram, vs);
gl::LinkProgram(shaderProgram);
while (!glfwWindowShouldClose(window))
{
gl::ClearColor(0, 0, 0, 0);
gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT);
gl::UseProgram(shaderProgram);
gl::BindVertexArray(vao);
gl::DrawArrays(gl::TRIANGLES, 0, 3);
glfwPollEvents();
glfwSwapBuffers(window);
}
glfwTerminate();
return 0;
}
Compiling through Xcode, using a 2013 Macbook Mini with Intel HD Graphics 5000. It's also probably worth noting that GLLoadGen's GetNumMissing() method reports 82 missing functions on OS X, and I have no idea why that is or how to fix it. GLFW includes gl.h as opposed to gl3.h, but forcing it to include gl3.h by defining the required macro outputs a warning about including both headers, and still nothing draws. Any help or suggestions would be great.

You have to call glfwInit before you call any other GLFW function. Also register an error callback so that you get diagnostics about why a certain GLFW operation failed. You requested an OpenGL profile (4.3 core) that OS X Mavericks does not support; it tops out at OpenGL 4.1. Because you call glfwInit after setting the window hints, the hints are reset to their defaults, which is why you still get a window and a context, just not the profile you asked for. Pulling glfwInit in front of the hints solves that problem, but then window and context creation fails due to the lack of OS support for GL 4.3.
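A minimal sketch of the corrected ordering (assumptions: GLFW 3, and OpenGL 4.1 as the ceiling on Mavericks; the error callback is optional but cheap):
#include <cstdio>
#include <GLFW/glfw3.h>
static void onGlfwError(int code, const char* description)
{
    std::fprintf(stderr, "GLFW error %d: %s\n", code, description);
}
int main()
{
    glfwSetErrorCallback(onGlfwError); // one of the few calls allowed before glfwInit
    if (!glfwInit())
        return 1;
    // Hints must come after glfwInit (which resets them) and before glfwCreateWindow.
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1); // 4.3 is unavailable on Mavericks
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); // required for core contexts on OS X
    GLFWwindow* window = glfwCreateWindow(640, 480, "First GLSL Triangle", nullptr, nullptr);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);
    // ... load GL functions and render as before ...
    glfwTerminate();
    return 0;
}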

After every OpenGL call, check that no error was generated (use glGetError, or gl::GetError with GLLoadGen). You must also check that your shaders compiled successfully; there may well be errors. So check that as well (glGetShaderiv with GL_COMPILE_STATUS, plus glGetShaderInfoLog for the log). Do the same for the link stage (glGetProgramiv / glGetProgramInfoLog).
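A sketch of those checks in the question's GLLoadGen style (assuming the func_cpp-generated gl:: names, which mirror the usual GL entry points):
GLint status = 0;
gl::GetShaderiv(vs, gl::COMPILE_STATUS, &status);
if (!status)
{
    char log[1024];
    GLsizei len = 0;
    gl::GetShaderInfoLog(vs, sizeof(log), &len, log);
    fprintf(stderr, "Vertex shader compile failed:\n%.*s\n", (int)len, log);
}
// ... repeat for fs, then after gl::LinkProgram:
gl::GetProgramiv(shaderProgram, gl::LINK_STATUS, &status);
if (!status)
{
    char log[1024];
    GLsizei len = 0;
    gl::GetProgramInfoLog(shaderProgram, sizeof(log), &len, log);
    fprintf(stderr, "Program link failed:\n%.*s\n", (int)len, log);
}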

Related

OpenGL - Why am I getting this error? COMPILATION_FAILED version '330' is not supported

I'm using Xcode (Version 12.0.1) on macOS 10.15.7.
I installed the latest versions of glew, glfw, and glm with Homebrew and tried running this simple code snippet, but my OpenGL version seems to be incompatible with the shaders used.
I do not want to change the code; is there any way to downgrade the shaders? (Could that be done with Homebrew as well?)
Here's the code:
#include <iostream>
//#define GLEW_STATIC 1 // This allows linking with Static Library on Windows, without DLL
#include <GL/glew.h> // Include GLEW - OpenGL Extension Wrangler
#include <GLFW/glfw3.h> // GLFW provides a cross-platform interface for creating a graphical context,
// initializing OpenGL and binding inputs
#include <glm/glm.hpp> // GLM is an optimized math library with syntax similar to the OpenGL Shading Language
const char* getVertexShaderSource()
{
// Insert Shaders here ...
// For now, you use a string for your shader code, in the assignment, shaders will be stored in .glsl files
return
"#version 330 core\n"
"layout (location = 0) in vec3 aPos;"
"layout (location = 1) in vec3 aColor;"
"out vec3 vertexColor;"
"void main()"
"{"
" vertexColor = aColor;"
" gl_Position = vec4(aPos.x, aPos.y, aPos.z, 1.0);"
"}";
}
const char* getFragmentShaderSource()
{
return
"#version 330 core\n"
"in vec3 vertexColor;"
"out vec4 FragColor;"
"void main()"
"{"
" FragColor = vec4(vertexColor.r, vertexColor.g, vertexColor.b, 1.0f);"
"}";
}
int compileAndLinkShaders()
{
// compile and link shader program
// return shader program id
// ------------------------------------
// vertex shader
int vertexShader = glCreateShader(GL_VERTEX_SHADER);
const char* vertexShaderSource = getVertexShaderSource();
glShaderSource(vertexShader, 1, &vertexShaderSource, NULL);
glCompileShader(vertexShader);
// check for shader compile errors
int success;
char infoLog[512];
glGetShaderiv(vertexShader, GL_COMPILE_STATUS, &success);
if (!success)
{
glGetShaderInfoLog(vertexShader, 512, NULL, infoLog);
std::cerr << "ERROR::SHADER::VERTEX::COMPILATION_FAILED\n" << infoLog << std::endl;
}
// fragment shader
int fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
const char* fragmentShaderSource = getFragmentShaderSource();
glShaderSource(fragmentShader, 1, &fragmentShaderSource, NULL);
glCompileShader(fragmentShader);
// check for shader compile errors
glGetShaderiv(fragmentShader, GL_COMPILE_STATUS, &success);
if (!success)
{
glGetShaderInfoLog(fragmentShader, 512, NULL, infoLog);
std::cerr << "ERROR::SHADER::FRAGMENT::COMPILATION_FAILED\n" << infoLog << std::endl;
}
// link shaders
int shaderProgram = glCreateProgram();
glAttachShader(shaderProgram, vertexShader);
glAttachShader(shaderProgram, fragmentShader);
glLinkProgram(shaderProgram);
// check for linking errors
glGetProgramiv(shaderProgram, GL_LINK_STATUS, &success);
if (!success) {
glGetProgramInfoLog(shaderProgram, 512, NULL, infoLog);
std::cerr << "ERROR::SHADER::PROGRAM::LINKING_FAILED\n" << infoLog << std::endl;
}
glDeleteShader(vertexShader);
glDeleteShader(fragmentShader);
return shaderProgram;
//return -1;
}
int createVertexArrayObject()
{
// A vertex is a point on a polygon, it contains positions and other data (eg: colors)
glm::vec3 vertexArray[] = {
glm::vec3( 0.0f, 0.5f, 0.0f), // top center position
glm::vec3( 1.0f, 0.0f, 0.0f), // top center color (red)
glm::vec3( 0.5f, -0.5f, 0.0f), // bottom right
glm::vec3( 0.0f, 1.0f, 0.0f), // bottom right color (green)
glm::vec3(-0.5f, -0.5f, 0.0f), // bottom left
glm::vec3( 0.0f, 0.0f, 1.0f), // bottom left color (blue)
};
// Create a vertex array
GLuint vertexArrayObject;
glGenVertexArrays(1, &vertexArrayObject);
glBindVertexArray(vertexArrayObject);
// Upload Vertex Buffer to the GPU, keep a reference to it (vertexBufferObject)
GLuint vertexBufferObject;
glGenBuffers(1, &vertexBufferObject);
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexArray), vertexArray, GL_STATIC_DRAW);
glVertexAttribPointer(0, // attribute 0 matches aPos in Vertex Shader
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
2*sizeof(glm::vec3), // stride - each vertex contain 2 vec3 (position, color)
(void*)0 // array buffer offset
);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, // attribute 1 matches aColor in Vertex Shader
3,
GL_FLOAT,
GL_FALSE,
2*sizeof(glm::vec3),
(void*)sizeof(glm::vec3) // color is offset by one vec3 (it comes after position)
);
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, 0); // VAO already stored the state we just defined, safe to unbind buffer
glBindVertexArray(0); // Unbind to not modify the VAO
return vertexArrayObject;
}
int main(int argc, char*argv[])
{
// Initialize GLFW and OpenGL version
glfwInit();
#if defined(PLATFORM_OSX)
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
#else
// On windows, we set OpenGL version to 2.1, to support more hardware
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);
#endif
// Create Window and rendering context using GLFW, resolution is 800x600
GLFWwindow* window = glfwCreateWindow(800, 600, "Comp371 - Lab 01", NULL, NULL);
if (window == NULL)
{
std::cerr << "Failed to create GLFW window" << std::endl;
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window);
// Initialize GLEW
glewExperimental = true; // Needed for core profile
if (glewInit() != GLEW_OK) {
std::cerr << "Failed to create GLEW" << std::endl;
glfwTerminate();
return -1;
}
// Black background
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// Compile and link shaders here ...
int shaderProgram = compileAndLinkShaders();
// Define and upload geometry to the GPU here ...
int vao = createVertexArrayObject();
// Entering Main Loop
while(!glfwWindowShouldClose(window))
{
// Each frame, reset color of each pixel to glClearColor
glClear(GL_COLOR_BUFFER_BIT);
// TODO - draw rainbow triangle
glUseProgram(shaderProgram);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3); // 3 vertices, starting at index 0
glBindVertexArray(0);
// End frame
glfwSwapBuffers(window);
// Detect inputs
glfwPollEvents();
if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)
glfwSetWindowShouldClose(window, true);
}
// Shutdown GLFW
glfwTerminate();
return 0;
}
And here's the error I'm getting:
2021-01-22 21:23:23.336552-0500 OpenGLtest [27358:306573] Metal API Validation Enabled
2021-01-22 21:23:23.361027-0500 OpenGLtest [27358:306750] flock failed to lock maps file: errno = 35
2021-01-22 21:23:23.361368-0500 OpenGLtest [27358:306750] flock failed to lock maps file: errno = 35
ERROR::SHADER::VERTEX::COMPILATION_FAILED
ERROR: 0:1: '' : version '330' is not supported
ERROR: 0:1: '' : syntax error: #version
ERROR: 0:2: 'layout' : syntax error: syntax error
ERROR::SHADER::FRAGMENT::COMPILATION_FAILED
ERROR: 0:1: '' : version '330' is not supported
ERROR: 0:1: '' : syntax error: #version
ERROR: 0:2: 'f' : syntax error: syntax error
ERROR::SHADER::PROGRAM::LINKING_FAILED
ERROR: One or more attached shaders not successfully compiled
Program ended with exit code: 9
How can this be fixed?
You're trying to compile #version 330 GLSL code in a GL 3.2 context. Since GL 3.2 only supports GLSL #versions up to 150, that won't work.
Either rewrite your GLSL to target #version 150 or request a GL 3.3 context.
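For the second option, a sketch of the GLFW hints (on macOS, 3.2+ contexts must be forward-compatible core profiles, and the OS will actually hand back the highest version it supports):
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); // matches "#version 330 core"
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); // mandatory on macOS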
Okay, so for some reason there's no way to install an older version of glew using Homebrew, and after wasting an entire day on this, the error was solved by the following:
Xcode wasn't going into the #if and was therefore executing the #else:
#if defined(PLATFORM_OSX)
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
#else
// On windows, we set OpenGL version to 2.1, to support more hardware
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);
#endif
So I solved this by keeping only this code:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
I couldn't recreate this issue in VS Code... was Xcode just being dumb? (More likely, PLATFORM_OSX was never defined in the Xcode build settings, so the preprocessor always took the #else branch.)
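If you'd rather not rely on a project-defined macro at all, the compiler's predefined __APPLE__ symbol works under both Xcode and command-line clang/gcc on macOS; a sketch:
#if defined(__APPLE__) // predefined by the compiler on macOS, no build setting needed
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
#else
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);
#endif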
You're restricted to an OpenGL 2.1 context when you create a plain legacy context on Mac OS X. To get OpenGL 3.2 in Xcode without window hints, you can use the GLKit (GLK) library functions to automatically retrieve a GL 3.2 context. Note that these libraries are all deprecated as of macOS 10.14.
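Whichever route you take, it's worth printing what kind of context you actually received; a small check, run after the context is current:
printf("GL_VERSION: %s\n", glGetString(GL_VERSION));
printf("GLSL:       %s\n", glGetString(GL_SHADING_LANGUAGE_VERSION));
// A legacy macOS context reports 2.1 here; a core-profile request reports 3.2 or higher.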

GLFW - many errors in this code

So my university lecturer gave us this code and it doesn't work... it never has, and no one has been able to get it to work so far. Are we being stupid, or is our lecturer giving us broken material? I seriously can't figure this out and need help. I managed to get part way through fixing many mistakes, but after that the issues got harder and harder to solve, despite this being '100% working' code. Side note: all the directories are formatted correctly and additional dependencies have all been set up correctly, to the best of my knowledge.
//First Shader Handling Program
#include "stdafx.h"
#include "gl_core_4_3.hpp"
#include <GLFW/glfw3.h>
int _tmain(int argc, _TCHAR* argv[])
{
//Select the 4.3 core profile
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
//Start the OpenGL context and open a window using the //GLFW helper library
if (!glfwInit()) {
fprintf(stderr, "ERROR: could not start GLFW3\n");
glfwTerminate();
return 1;
}
GLFWwindow* window = glfwCreateWindow(640, 480, "First GLSL Triangle", NULL, NULL);
if (!window) {
fprintf(stderr, "ERROR: could not open window with GLFW3\n");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent(window);
//Load the OpenGL functions for C++ gl::exts::LoadTest didLoad = gl::sys::LoadFunctions(); if (!didLoad) {
//Load failed
fprintf(stderr, "ERROR: GLLoadGen failed to load functions\n");
glfwTerminate();
return 1;
}
printf("Number of functions that failed to load : %i.\n", didLoad.GetNumMissing());
//Tell OpenGL to only draw a pixel if its shape is closer to //the viewer
//i.e. Enable depth testing with smaller depth value //interpreted as being closer gl::Enable(gl::DEPTH_TEST); gl::DepthFunc(gl::LESS);
//Set up the vertices for a triangle
float points[] = {
0.0f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f
};
//Create a vertex buffer object to hold this data GLuint vbo=0;
gl::GenBuffers(1, &vbo);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::BufferData(gl::ARRAY_BUFFER, 9 * sizeof(float), points,
gl::STATIC_DRAW);
//Create a vertex array object
GLuint vao = 0;
gl::GenVertexArrays(1, &vao);
gl::BindVertexArray(vao);
gl::EnableVertexAttribArray(0);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::VertexAttribPointer(0, 3, gl::FLOAT, FALSE, 0, NULL);
//The shader code strings which later we will put in //separate files
//The Vertex Shader
const char* vertex_shader =
"#version 400\n"
"in vec3 vp;"
"void main() {"
" gl_Position = vec4(vp, 1.0);"
"}";
//The Fragment Shader
const char* fragment_shader =
"#version 400\n"
"out vec4 frag_colour;"
"void main() {"
" frag_colour = vec4(1.0, 0.5, 0.0, 1.0);"
"}";
//Load the strings into shader objects and compile GLuint vs = gl::CreateShader(gl::VERTEX_SHADER); gl::ShaderSource(vs, 1, &vertex_shader, NULL); gl::CompileShader(vs);
GLuint fs = gl::CreateShader(gl::FRAGMENT_SHADER); gl::ShaderSource(fs, 1, &fragment_shader, NULL); gl::CompileShader(fs);
//Compiled shaders must be compiled into a single executable //GPU shader program
//Create empty program and attach shaders GLuint shader_program = gl::CreateProgram(); gl::AttachShader(shader_program, fs); gl::AttachShader(shader_program, vs); gl::LinkProgram(shader_program);
//Now draw
while (!glfwWindowShouldClose(window)) {
//Clear the drawing surface
gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT);
gl::UseProgram(shader_program);
gl::BindVertexArray(vao);
//Draw point 0 to 3 from the currently bound VAO with
//current in-use shader
gl::DrawArrays(gl::TRIANGLES, 0, 3);
//update GLFW event handling
glfwPollEvents();
//Put the stuff we have been drawing onto the display glfwSwapBuffers(window);
}
//Close GLFW and end
glfwTerminate();
return 0;
}
Your line endings seem to have been mangled.
There are multiple places in your code where actual code was not broken onto its own line, so that code now sits on the same line as a comment and is therefore not being executed. This is your program with proper line endings:
//First Shader Handling Program
#include "stdafx.h"
#include "gl_core_4_3.hpp"
#include <GLFW/glfw3.h>
int _tmain(int argc, _TCHAR* argv[])
{
//Select the 4.3 core profile
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
//Start the OpenGL context and open a window using the
//GLFW helper library
if (!glfwInit()) {
fprintf(stderr, "ERROR: could not start GLFW3\n");
glfwTerminate();
return 1;
}
GLFWwindow* window = glfwCreateWindow(640, 480, "First GLSL Triangle", NULL, NULL);
if (!window) {
fprintf(stderr, "ERROR: could not open window with GLFW3\n");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent(window);
//Load the OpenGL functions for C++
gl::exts::LoadTest didLoad = gl::sys::LoadFunctions();
if (!didLoad) {
//Load failed
fprintf(stderr, "ERROR: GLLoadGen failed to load functions\n");
glfwTerminate();
return 1;
}
printf("Number of functions that failed to load : %i.\n", didLoad.GetNumMissing());
//Tell OpenGL to only draw a pixel if its shape is closer to
//the viewer
//i.e. Enable depth testing with smaller depth value
//interpreted as being closer
gl::Enable(gl::DEPTH_TEST);
gl::DepthFunc(gl::LESS);
//Set up the vertices for a triangle
float points[] = {
0.0f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f
};
//Create a vertex buffer object to hold this data
GLuint vbo=0;
gl::GenBuffers(1, &vbo);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::BufferData(gl::ARRAY_BUFFER, 9 * sizeof(float), points, gl::STATIC_DRAW);
//Create a vertex array object
GLuint vao = 0;
gl::GenVertexArrays(1, &vao);
gl::BindVertexArray(vao);
gl::EnableVertexAttribArray(0);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::VertexAttribPointer(0, 3, gl::FLOAT, FALSE, 0, NULL);
//The shader code strings which later we will put in
//separate files
//The Vertex Shader
const char* vertex_shader =
"#version 400\n"
"in vec3 vp;"
"void main() {"
" gl_Position = vec4(vp, 1.0);"
"}";
//The Fragment Shader
const char* fragment_shader =
"#version 400\n"
"out vec4 frag_colour;"
"void main() {"
" frag_colour = vec4(1.0, 0.5, 0.0, 1.0);"
"}";
//Load the strings into shader objects and compile
GLuint vs = gl::CreateShader(gl::VERTEX_SHADER);
gl::ShaderSource(vs, 1, &vertex_shader, NULL);
gl::CompileShader(vs);
GLuint fs = gl::CreateShader(gl::FRAGMENT_SHADER);
gl::ShaderSource(fs, 1, &fragment_shader, NULL);
gl::CompileShader(fs);
//Compiled shaders must be compiled into a single executable
//GPU shader program
//Create empty program and attach shaders
GLuint shader_program = gl::CreateProgram();
gl::AttachShader(shader_program, fs);
gl::AttachShader(shader_program, vs);
gl::LinkProgram(shader_program);
//Now draw
while (!glfwWindowShouldClose(window)) {
//Clear the drawing surface
gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT);
gl::UseProgram(shader_program);
gl::BindVertexArray(vao);
//Draw point 0 to 3 from the currently bound VAO with
//current in-use shader
gl::DrawArrays(gl::TRIANGLES, 0, 3);
//update GLFW event handling
glfwPollEvents();
//Put the stuff we have been drawing onto the display
glfwSwapBuffers(window);
}
//Close GLFW and end
glfwTerminate();
return 0;
}

Minimal C++ OpenGL Segmentation Fault

I am following the first instalment in Tom Dalling's Modern OpenGL tutorial and when I downloaded his code it worked perfectly. So then I wanted to create my own minimal version to help me understand how it all worked.
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <GLM/glm.hpp>
#include <stdexcept>
GLFWwindow* gWindow = NULL;
GLint gProgram = 0;
GLuint gVertexShader = 0;
GLuint gFragmentShader = 0;
GLuint gVAO = 0;
GLuint gVBO = 0;
int main() {
/** Initialise GLFW **/
if (!glfwInit()){
throw std::runtime_error("GLFW not initialised.");
}
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
gWindow = glfwCreateWindow(800, 600, "OpenGL 3.2", NULL, NULL);
if(!gWindow) {
throw std::runtime_error("glfwCreateWindow failed.");
}
glfwMakeContextCurrent(gWindow);
/** Initialise GLEW **/
glewExperimental = GL_TRUE;
if (glewInit() != GLEW_OK) {
throw std::runtime_error("Glew not initialised.");
}
if (!GLEW_VERSION_3_2) {
throw std::runtime_error("OpenGL 3.2 not supported.");
}
/** Set up shaders **/
gVertexShader = glCreateShader(GL_VERTEX_SHADER); //Segmentation fault here
gFragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(gVertexShader, 1, (const GLchar* const*)R"(
#version 150
in vec3 vertexPos
out vec colour;
void main() {
colour = vertexPos;
gl_Position = vec4(vertexPos, 1);
}
)", NULL);
glShaderSource(gFragmentShader, 1, (const GLchar* const*)R"(
#version 150
in vec3 colour;
out vec4 fragColour;
void main() {
fragColour = colour;
}
)", NULL);
glCompileShader(gVertexShader);
glCompileShader(gFragmentShader);
/** Set up program **/
gProgram = glCreateProgram();
glAttachShader(gProgram, gVertexShader);
glAttachShader(gProgram, gFragmentShader);
glLinkProgram(gProgram);
glDetachShader(gProgram, gVertexShader);
glDetachShader(gProgram, gFragmentShader);
glDeleteShader(gVertexShader);
glDeleteShader(gFragmentShader);
/** Set up VAO and VBO **/
float vertexData[] = {
// X, Y, Z
0.2f, 0.2f, 0.0f,
0.8f, 0.2f, 0.0f,
0.5f, 0.8f, 0.0f,
};
glGenVertexArrays(1, &gVAO);
glBindVertexArray(gVAO);
glGenBuffers(1, &gVBO);
glBindBuffer(GL_ARRAY_BUFFER, gVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(GL_FLOAT)*9, vertexData, GL_STATIC_DRAW);
glVertexAttribPointer(glGetAttribLocation(gProgram, "vertexPos"), 3, GL_FLOAT, GL_FALSE, 0, NULL);
/** Main Loop **/
while (!glfwWindowShouldClose(gWindow)) {
glfwPollEvents();
glBindBuffer(GL_ARRAY_BUFFER, gVBO);
glDrawArrays(GL_TRIANGLES, 0, 1);
glfwSwapBuffers(gWindow);
}
return 0;
}
This is what I created, and it builds correctly. However, when running it I get a segmentation fault upon calling glCreateShader.
I have done a lot of searching to find a solution to this, and so far I've found two main explanations for it:
One is that glewExperimental needs to be set to GL_TRUE; however, nothing changed after adding this line.
The second is that the OpenGL context is not current, but I believe the call to glfwMakeContextCurrent covers this point.
From here I have no clue what to do, so any tips would be a great help!
EDIT: Thank you all for your help! I eventually managed to get it working and have uploaded the code to pastebin for future reference.
Problem solution
Your program crashes not at the line you pointed to, but on the first glShaderSource call. You actually pass a pointer to a string as the third argument, casting it to (const GLchar* const*). That's not what glShaderSource expects: it expects a pointer to an array of pointers to strings, and the length of that array should be passed as the second argument (see the documentation).
As a quick-n-dirty fix for your code, I declared the following:
const char *vs_source[] = {
"#version 150\n",
"in vec3 vertexPos\n",
"out vec colour;\n",
"void main() {\n",
" colour = vertexPos;\n",
" gl_Position = vec4(vertexPos, 1);\n",
"}\n"};
const char *fs_source[] = {
"#version 150\n",
"in vec3 colour;\n",
"out vec4 fragColour;\n",
"void main() {\n",
" fragColour = colour;\n",
"}\n"};
and replaced the calls to glShaderSource with:
glShaderSource(gVertexShader, 7, (const char**)vs_source, NULL);
glShaderSource(gFragmentShader, 6, (const char**)fs_source, NULL);
Now the program works fine and shows a window as expected.
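An even simpler fix, sketched below, keeps a single raw-string literal and just passes the address of one const char* (glShaderSource wants a pointer to an array of string pointers, and a count of 1 is fine). This sketch also repairs the missing semicolon and the "out vec" typo from the original shader source:
const GLchar* vsSource = R"(#version 150
in vec3 vertexPos;
out vec3 colour;
void main() {
    colour = vertexPos;
    gl_Position = vec4(vertexPos, 1.0);
})";
glShaderSource(gVertexShader, 1, &vsSource, NULL);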
Hints
Your code did not even compile on my machine (I use gcc 4.8.3 @ Linux) because of the cast of a string literal to (const GLchar* const*). So try to set more restrictive flags in your compiler options and pay attention to compiler warnings. Moreover, treat warnings as errors.
Use some kind of memory-checking tool to determine the exact location of crashes. For example, I used Valgrind to localize the problem.
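For example, a compiler invocation along those lines might look like this (the exact libraries vary by system; this is only an illustration):
g++ -Wall -Wextra -Werror main.cpp -o main $(pkg-config --cflags --libs glfw3 glew) -lGL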

OpenGL Red Book with Mac OS X

I would like to work through the OpenGL Red Book, The OpenGL Programming Guide, 8th edition, using Xcode on Mac OS X.
I am unable to run the first code example, triangles.cpp. I have tried including the GLUT and GL frameworks that come with Xcode and I have searched around enough to see that I am not likely to figure this out on my own.
Assuming that I have a fresh installation of Mac OS X, and I have freshly installed Xcode with Xcode command-line tools, what are the step-by-step instructions to be able to run triangles.cpp in that environment?
Unlike this question, my preference would be not to use Cocoa, Objective-C or Swift. My preference would be to stay in C++/C only. An answer is only correct if I can follow it step-by-step and end up with a running triangles.cpp program.
My preference is Mac OS X 10.9, however a correct answer can assume 10.9, 10.10 or 10.11.
Thank you.
///////////////////////////////////////////////////////////////////////
//
// triangles.cpp
//
///////////////////////////////////////////////////////////////////////
#include <iostream>
using namespace std;
#include "vgl.h"
#include "LoadShader.h"
enum VAO_IDs { Triangles, NumVAOs };
enum Buffer_IDs { ArrayBuffer, NumBuffers };
enum Attrib_IDs { vPosition = 0 };
GLuint VAOs[NumVAOs];
GLuint Buffers[NumBuffers];
const GLuint NumVertices = 6;
//---------------------------------------------------------------------
//
// init
//
void
init(void)
{
glGenVertexArrays(NumVAOs, VAOs);
glBindVertexArray(VAOs[Triangles]);
GLfloat vertices[NumVertices][2] = {
{ -0.90, -0.90 }, // Triangle 1
{ 0.85, -0.90 },
{ -0.90, 0.85 },
{ 0.90, -0.85 }, // Triangle 2
{ 0.90, 0.90 },
{ -0.85, 0.90 }
};
glGenBuffers(NumBuffers, Buffers);
glBindBuffer(GL_ARRAY_BUFFER, Buffers[ArrayBuffer]);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices),
vertices, GL_STATIC_DRAW);
ShaderInfo shaders[] = {
{ GL_VERTEX_SHADER, "triangles.vert" },
{ GL_FRAGMENT_SHADER, "triangles.frag" },
{ GL_NONE, NULL }
};
GLuint program = LoadShaders(*shaders);
glUseProgram(program);
glVertexAttribPointer(vPosition, 2, GL_FLOAT,
GL_FALSE, 0, BUFFER_OFFSET(0));
glEnableVertexAttribArray(vPosition);
}
//---------------------------------------------------------------------
//
// display
//
void
display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
glBindVertexArray(VAOs[Triangles]);
glDrawArrays(GL_TRIANGLES, 0, NumVertices);
glFlush();
}
//---------------------------------------------------------------------
//
// main
//
int
main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA);
glutInitWindowSize(512, 512);
glutInitContextVersion(4, 3);
glutInitContextProfile(GLUT_CORE_PROFILE);
glutCreateWindow(argv[0]);
glewExperimental = GL_TRUE;
if (glewInit()) {
cerr << "Unable to initialize GLEW ... exiting" << endl;
exit(EXIT_FAILURE);
}
init();
glutDisplayFunc(display);
glutMainLoop();
}
Edit 1: In response to the first comment, here is the naive effort.
Open Xcode 5.1.1 on Mac OS X 10.9.5
Create a new C++ Command-line project.
Paste over the contents of main.cpp with the contents of triangles.cpp.
Click on the project -> Build Phases -> Link Binary with Libraries
Add OpenGL.framework and GLUT.framework
Result: "/Users/xxx/Desktop/Triangles/Triangles/main.cpp:10:10: 'vgl.h' file not found"
Edit 2: Added the vgl translation unit and the LoadShaders translation unit, and also added libFreeGlut.a and libGlew32.a to my project's compilation/linking. Moved all of the OpenGL book's Include contents to my project's source directory. Had to change several include statements to use quoted includes instead of angled includes. It feels like this is closer to working, but it is unable to find LoadShader.h. Note that the translation unit in the OpenGL download is called LoadShaders (plural). Changing triangles.cpp to reference LoadShaders.h fixed the include problem, but the contents of that translation unit don't seem to match the signatures of what's being called from triangles.cpp.
There are some issues with the source and with the files in oglpg-8th-edition.zip:
triangles.cpp uses non-standard GLUT functions that aren't included in GLUT and instead are only part of the freeglut implementation (glutInitContextVersion and glutInitContextProfile). freeglut doesn't really support OS X, and building it there relies on additional X11 support. Instead of telling you how to do that, I'm just going to modify the source to build with OS X's GLUT framework.
The code depends on GLEW, and the book's source download apparently doesn't include a binary you can use, so you'll need to build it yourself.
Build GLEW with the following commands:
git clone git://git.code.sf.net/p/glew/code glew
cd glew
make extensions
make
Now:
Create a C++ command line Xcode project
Set the executable to link with the OpenGL and GLUT frameworks and the glew dylib you just built.
Modify the project "Header Search Paths" to include the location of the glew headers for the library you built, followed by the path to oglpg-8th-edition/include
Add oglpg-8th-edition/lib/LoadShaders.cpp to your xcode project
Paste the triangles.cpp source into the main.cpp of your Xcode project
Modify the source: replace #include "vgl.h" with:
#include <GL/glew.h>
#include <OpenGL/gl3.h>
#include <GLUT/glut.h>
#define BUFFER_OFFSET(x) ((const void*) (x))
Also make sure that the typos in the version of triangles.cpp that you include in your question are fixed: you include "LoadShader.h" when it should be "LoadShaders.h", and LoadShaders(*shaders); should be LoadShaders(shaders);. (The code printed in my copy of the book doesn't contain these errors.)
Delete the calls to glutInitContextVersion and glutInitContextProfile.
Change the parameter to glutInitDisplayMode to GLUT_RGBA | GLUT_3_2_CORE_PROFILE
At this point the code builds, links, and runs, however running the program displays a black window for me instead of the expected triangles.
About fixing the black-window issue mentioned in Matthew's and bames53's comments:
Follow bames53's answer
Define shader as string
const char *pTriangleVert =
"#version 410 core\n\
layout(location = 0) in vec4 vPosition;\n\
void\n\
main()\n\
{\n\
gl_Position= vPosition;\n\
}";
const char *pTriangleFrag =
"#version 410 core\n\
out vec4 fColor;\n\
void\n\
main()\n\
{\n\
fColor = vec4(0.0, 0.0, 1.0, 1.0);\n\
}";
OpenGL 4.1 is supported on my iMac, so I changed the version to 410.
ShaderInfo shaders[] = {
{ GL_VERTEX_SHADER, pTriangleVert},
{ GL_FRAGMENT_SHADER, pTriangleFrag },
{ GL_NONE, NULL }
};
Modify the ShaderInfo struct slightly
change
typedef struct {
GLenum type;
const char* filename;
GLuint shader;
} ShaderInfo;
into
typedef struct {
GLenum type;
const char* source;
GLuint shader;
} ShaderInfo;
Modify the LoadShaders function slightly:
comment out the code that reads the shader from a file, changing
/*
const GLchar* source = ReadShader( entry->filename );
if ( source == NULL ) {
for ( entry = shaders; entry->type != GL_NONE; ++entry ) {
glDeleteShader( entry->shader );
entry->shader = 0;
}
return 0;
}
glShaderSource( shader, 1, &source, NULL );
delete [] source;*/
into
glShaderSource(shader, 1, &entry->source, NULL);
You'd better turn on DEBUG in case there are shader compile errors.
You can use the example from this link; it's almost the same, but it uses GLFW instead of GLUT.
http://www.tomdalling.com/blog/modern-opengl/01-getting-started-in-xcode-and-visual-cpp/
/*
main
Copyright 2012 Thomas Dalling - http://tomdalling.com/
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
//#include "platform.hpp"
// third-party libraries
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>
// standard C++ libraries
#include <cassert>
#include <iostream>
#include <stdexcept>
#include <cmath>
// tdogl classes
#include "Program.h"
// constants
const glm::vec2 SCREEN_SIZE(800, 600);
// globals
GLFWwindow* gWindow = NULL;
tdogl::Program* gProgram = NULL;
GLuint gVAO = 0;
GLuint gVBO = 0;
// loads the vertex shader and fragment shader, and links them to make the global gProgram
static void LoadShaders() {
std::vector<tdogl::Shader> shaders;
shaders.push_back(tdogl::Shader::shaderFromFile("vertex-shader.txt", GL_VERTEX_SHADER));
shaders.push_back(tdogl::Shader::shaderFromFile("fragment-shader.txt", GL_FRAGMENT_SHADER));
gProgram = new tdogl::Program(shaders);
}
// loads a triangle into the VAO global
static void LoadTriangle() {
// make and bind the VAO
glGenVertexArrays(1, &gVAO);
glBindVertexArray(gVAO);
// make and bind the VBO
glGenBuffers(1, &gVBO);
glBindBuffer(GL_ARRAY_BUFFER, gVBO);
// Put the three triangle vertices into the VBO
GLfloat vertexData[] = {
// X Y Z
0.0f, 0.8f, 0.0f,
-0.8f,-0.8f, 0.0f,
0.8f,-0.8f, 0.0f,
};
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), vertexData, GL_STATIC_DRAW);
// connect the xyz to the "vert" attribute of the vertex shader
glEnableVertexAttribArray(gProgram->attrib("vert"));
glVertexAttribPointer(gProgram->attrib("vert"), 3, GL_FLOAT, GL_FALSE, 0, NULL);
// unbind the VBO and VAO
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
// draws a single frame
static void Render() {
// clear everything
glClearColor(0, 0, 0, 1); // black
glClear(GL_COLOR_BUFFER_BIT);
// bind the program (the shaders)
glUseProgram(gProgram->object());
// bind the VAO (the triangle)
glBindVertexArray(gVAO);
// draw the VAO
glDrawArrays(GL_TRIANGLES, 0, 3);
// unbind the VAO
glBindVertexArray(0);
// unbind the program
glUseProgram(0);
// swap the display buffers (displays what was just drawn)
glfwSwapBuffers(gWindow);
}
void OnError(int errorCode, const char* msg) {
throw std::runtime_error(msg);
}
// the program starts here
void AppMain() {
// initialise GLFW
glfwSetErrorCallback(OnError);
if(!glfwInit())
throw std::runtime_error("glfwInit failed");
// open a window with GLFW
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
gWindow = glfwCreateWindow((int)SCREEN_SIZE.x, (int)SCREEN_SIZE.y, "OpenGL Tutorial", NULL, NULL);
if(!gWindow)
throw std::runtime_error("glfwCreateWindow failed. Can your hardware handle OpenGL 3.2?");
// GLFW settings
glfwMakeContextCurrent(gWindow);
// initialise GLEW
glewExperimental = GL_TRUE; //stops glew crashing on OSX :-/
if(glewInit() != GLEW_OK)
throw std::runtime_error("glewInit failed");
// print out some info about the graphics drivers
std::cout << "OpenGL version: " << glGetString(GL_VERSION) << std::endl;
std::cout << "GLSL version: " << glGetString(GL_SHADING_LANGUAGE_VERSION) << std::endl;
std::cout << "Vendor: " << glGetString(GL_VENDOR) << std::endl;
std::cout << "Renderer: " << glGetString(GL_RENDERER) << std::endl;
// make sure OpenGL version 3.2 API is available
if(!GLEW_VERSION_3_2)
throw std::runtime_error("OpenGL 3.2 API is not available.");
// load vertex and fragment shaders into opengl
LoadShaders();
// create buffer and fill it with the points of the triangle
LoadTriangle();
// run while the window is open
while(!glfwWindowShouldClose(gWindow)){
// process pending events
glfwPollEvents();
// draw one frame
Render();
}
// clean up and exit
glfwTerminate();
}
int main(int argc, char *argv[]) {
try {
AppMain();
} catch (const std::exception& e){
std::cerr << "ERROR: " << e.what() << std::endl;
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
I have adapted the project for Mac here:
https://github.com/badousuan/openGLredBook9th
The project builds successfully, and most demos run as expected. However, the original code is based on OpenGL 4.5, while the Mac only supports version 4.1, so some newer API calls may fail. If a target doesn't work well, consider this version issue and make the necessary adaptations.
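A cheap way to discover that ceiling at runtime before porting a demo is to query the context version once it is current; a sketch:
GLint major = 0, minor = 0;
glGetIntegerv(GL_MAJOR_VERSION, &major); // available in GL 3.0+ contexts
glGetIntegerv(GL_MINOR_VERSION, &minor);
printf("Context is GL %d.%d\n", major, minor); // expect at most 4.1 on the Mac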
I use the code from this tutorial: http://antongerdelan.net/opengl/hellotriangle.html, and it works on my Mac.
Here is the code I run.
#include <GL/glew.h> // include GLEW and new version of GL on Windows
#include <GLFW/glfw3.h> // GLFW helper library
#include <stdio.h>
int main() {
// start GL context and O/S window using the GLFW helper library
if (!glfwInit()) {
fprintf(stderr, "ERROR: could not start GLFW3\n");
return 1;
}
// uncomment these lines if on Apple OS X
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
GLFWwindow* window = glfwCreateWindow(640, 480, "Hello Triangle", NULL, NULL);
if (!window) {
fprintf(stderr, "ERROR: could not open window with GLFW3\n");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent(window);
// start GLEW extension handler
glewExperimental = GL_TRUE;
glewInit();
// get version info
const GLubyte* renderer = glGetString(GL_RENDERER); // get renderer string
const GLubyte* version = glGetString(GL_VERSION); // version as a string
printf("Renderer: %s\n", renderer);
printf("OpenGL version supported %s\n", version);
// tell GL to only draw onto a pixel if the shape is closer to the viewer
glEnable(GL_DEPTH_TEST); // enable depth-testing
glDepthFunc(GL_LESS); // depth-testing interprets a smaller value as "closer"
/* OTHER STUFF GOES HERE NEXT */
float points[] = {
0.0f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f
};
GLuint vbo = 0; // vertex buffer object
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(float), points, GL_STATIC_DRAW);
GLuint vao = 0; // vertex array object
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
const char* vertex_shader =
"#version 400\n"
"in vec3 vp;"
"void main() {"
" gl_Position = vec4(vp, 1.0);"
"}";
const char* fragment_shader =
"#version 400\n"
"out vec4 frag_colour;"
"void main() {"
" frag_colour = vec4(0.5, 0.0, 0.5, 1.0);"
"}";
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vertex_shader, NULL);
glCompileShader(vs);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fragment_shader, NULL);
glCompileShader(fs);
GLuint shader_programme = glCreateProgram();
glAttachShader(shader_programme, fs);
glAttachShader(shader_programme, vs);
glLinkProgram(shader_programme);
while(!glfwWindowShouldClose(window)) {
// wipe the drawing surface clear
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(shader_programme);
glBindVertexArray(vao);
// draw points 0-3 from the currently bound VAO with current in-use shader
glDrawArrays(GL_TRIANGLES, 0, 3);
// update other events like input handling
glfwPollEvents();
// put the stuff we've been drawing onto the display
glfwSwapBuffers(window);
}
// close GL context and any other GLFW resources
glfwTerminate();
return 0;
}

GLSL getting odd values back from my uniform and it seems to be set with the wrong value too

I'm having problems using a uniform in a vertex shader.
Here's the code:
// gcc main.c -o main `pkg-config --libs --cflags glfw3` -lGL -lm
#include <GLFW/glfw3.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
void gluErrorString(const char* why,GLenum errorCode);
void checkShader(GLuint status, GLuint shader, const char* which);
float verts[] = {
-0.5f, 0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f
};
const char* vertex_shader =
"#version 330\n"
"in vec3 vp;\n"
"uniform float u_time;\n"
"\n"
"void main () {\n"
" vec4 p = vec4(vp, 1.0);\n"
" p.x = p.x + u_time;\n"
" gl_Position = p;\n"
"}";
const char* fragment_shader =
"#version 330\n"
"out vec4 frag_colour;\n"
"void main () {\n"
" frag_colour = vec4 (0.5, 0.0, 0.5, 1.0);\n"
"}";
int main () {
if (!glfwInit ()) {
fprintf (stderr, "ERROR: could not start GLFW3\n");
return 1;
}
glfwWindowHint (GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint (GLFW_CONTEXT_VERSION_MINOR, 2);
//glfwWindowHint (GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint (GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
GLFWwindow* window = glfwCreateWindow (640, 480, "Hello Triangle", NULL, NULL);
if (!window) {
fprintf (stderr, "ERROR: could not open window with GLFW3\n");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent (window);
// vert arrays group vert buffers together unlike GLES2 (no vert arrays)
// we *must* have one of these even if we only need 1 vert buffer
GLuint vao = 0;
glGenVertexArrays (1, &vao);
glBindVertexArray (vao);
GLuint vbo = 0;
glGenBuffers (1, &vbo);
glBindBuffer (GL_ARRAY_BUFFER, vbo);
// each vert takes 3 float * 4 verts in the fan = 12 floats
glBufferData (GL_ARRAY_BUFFER, 12 * sizeof (float), verts, GL_STATIC_DRAW);
gluErrorString("buffer data",glGetError());
glEnableVertexAttribArray (0);
glBindBuffer (GL_ARRAY_BUFFER, vbo);
// 3 components per vert
glVertexAttribPointer (0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
gluErrorString("attrib pointer",glGetError());
GLuint vs = glCreateShader (GL_VERTEX_SHADER);
glShaderSource (vs, 1, &vertex_shader, NULL);
glCompileShader (vs);
GLint success = 0;
glGetShaderiv(vs, GL_COMPILE_STATUS, &success);
checkShader(success, vs, "Vert Shader");
GLuint fs = glCreateShader (GL_FRAGMENT_SHADER);
glShaderSource (fs, 1, &fragment_shader, NULL);
glCompileShader (fs);
glGetShaderiv(fs, GL_COMPILE_STATUS, &success);
checkShader(success, fs, "Frag Shader");
GLuint shader_program = glCreateProgram ();
glAttachShader (shader_program, fs);
glAttachShader (shader_program, vs);
glLinkProgram (shader_program);
gluErrorString("Link prog",glGetError());
glUseProgram (shader_program);
gluErrorString("use prog",glGetError());
GLuint uniT = glGetUniformLocation(shader_program,"u_time"); // ask gl to assign uniform id
gluErrorString("get uniform location",glGetError());
printf("uniT=%i\n",uniT);
glEnable (GL_DEPTH_TEST);
glDepthFunc (GL_LESS);
float t=0;
while (!glfwWindowShouldClose (window)) {
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
gluErrorString("clear",glGetError());
glUseProgram (shader_program);
gluErrorString("use prog",glGetError());
t=t+0.01f;
glUniform1f( uniT, (GLfloat)sin(t));
gluErrorString("set uniform",glGetError());
float val;
glGetUniformfv(shader_program, uniT, &val);
gluErrorString("get uniform",glGetError());
printf("val=%f ",val);
glBindVertexArray (vao);
gluErrorString("bind array",glGetError());
glDrawArrays (GL_TRIANGLE_FAN, 0, 4);
gluErrorString("draw arrays",glGetError());
glfwPollEvents ();
glfwSwapBuffers (window);
gluErrorString("swap buffers",glGetError());
}
glfwTerminate();
return 0;
}
void checkShader(GLuint status, GLuint shader, const char* which) {
if (status==GL_TRUE) return;
int length;
char buffer[1024];
glGetShaderInfoLog(shader, sizeof(buffer), &length, buffer);
fprintf (stderr,"%s Error: %s\n", which,buffer);
glfwTerminate();
exit(-1);
}
struct token_string
{
GLuint Token;
const char *String;
};
static const struct token_string Errors[] = {
{ GL_NO_ERROR, "no error" },
{ GL_INVALID_ENUM, "invalid enumerant" },
{ GL_INVALID_VALUE, "invalid value" },
{ GL_INVALID_OPERATION, "invalid operation" },
{ GL_STACK_OVERFLOW, "stack overflow" },
{ GL_STACK_UNDERFLOW, "stack underflow" },
{ GL_OUT_OF_MEMORY, "out of memory" },
{ GL_TABLE_TOO_LARGE, "table too large" },
#ifdef GL_EXT_framebuffer_object
{ GL_INVALID_FRAMEBUFFER_OPERATION_EXT, "invalid framebuffer operation" },
#endif
{ ~0, NULL } /* end of list indicator */
};
void gluErrorString(const char* why,GLenum errorCode)
{
if (errorCode== GL_NO_ERROR) return;
int i;
for (i = 0; Errors[i].String; i++) {
if (Errors[i].Token == errorCode) {
fprintf (stderr,"error: %s - %s\n",why,Errors[i].String);
glfwTerminate();
exit(-1);
}
}
}
When the code runs, the quad flickers as if the uniform is getting junk values. Reading the uniform back also shows odd values like 36893488147419103232.000000, where it should be just a simple sine value.
The problem with your code is only indirectly related to GL at all - your GL code is OK.
However, you are using modern OpenGL functions without loading the function pointers as an extension. This might work on some platforms, but not on others. Mac OS X does guarantee that these functions are exported in the system's GL libs. On Windows, Microsoft's opengl32.dll never contains functions beyond GL 1.1 - your code wouldn't even link there. On Linux, you're somewhere in between. There is only the old Linux OpenGL ABI document, which guarantees that OpenGL 1.2 functions must be exported by the library. In practice, most GL implementations' libs on Linux export everything (but the fact that a function is there does not mean that it is supported). But you should never directly link these functions, because nobody is guaranteeing anything.
However, the story does not end here: you apparently did this on an implementation which does export the symbols. But you did not include the correct headers, and you have set up your compiler very poorly. In C, it is valid (but poor style) to call a function which has not been declared before. The compiler will assume that it returns int and that all parameters are ints. In effect, you are calling these functions, but the compiler converts the arguments to int.
You would have noticed that if you had set up your compiler to produce some warnings, like -Wall on gcc:
a.c: In function ‘main’:
a.c:74: warning: implicit declaration of function ‘glGenVertexArrays’
a.c:75: warning: implicit declaration of function ‘glBindVertexArray’
[...]
However, the code compiles and links, and I can reproduce the results you described (I'm using Linux/Nvidia here).
To fix this, you should use an OpenGL loader library. For example, I got your code working by using GLEW. All I had to do was add, at the very top of the file,
#define GLEW_NO_GLU // because you re-implemented some glu-like functions with a different interface
#include <glew.h>
and calling
glewExperimental=GL_TRUE;
if (glewInit() != GLEW_OK) {
fprintf (stderr, "ERROR: failed to initialize GLEW\n");
glfwTerminate();
return 1;
}
glGetError(); // read away error generated by GLEW, it is broken in core profiles...
The GLEW headers include declarations for all the functions, so no implicit type conversions occur any more. GLEW might not be the best choice for core profiles, but I used it here because it's the loader I'm most familiar with.
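As a follow-up check, GLEW exposes per-version booleans after glewInit, so you can verify up front that the entry points you rely on were actually loaded; a sketch (the shaders here use #version 330, hence the 3.3 check):
if (!GLEW_VERSION_3_3) {
    fprintf(stderr, "ERROR: OpenGL 3.3 API is not available\n");
    glfwTerminate();
    return 1;
}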