Why won't CG shaders work with GL 3.2? - opengl

I've tried everything to get OpenGL 3.2 to render with Cg shaders in my game engine, but I have had no luck. So I decided to make a bare minimal project, but the shaders still won't work. In theory my test project should render a red triangle, but it comes out white because the shader is not doing anything.
I'll post the code here:
#include <stdio.h>
#include <stdlib.h>
#include <vector>
#include <string>
#include <GL/glew.h>
#include <Cg/cg.h>
#include <Cg/cgGL.h>
#include <SDL2/SDL.h>
int main()
{
SDL_Window *mainwindow;
SDL_GLContext maincontext;
SDL_Init(SDL_INIT_VIDEO);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
mainwindow = SDL_CreateWindow("Test", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 512, 512, SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);
maincontext = SDL_GL_CreateContext(mainwindow);
glewExperimental = GL_TRUE;
glewInit();
CGcontext cgcontext = cgCreateContext();
cgGLRegisterStates(cgcontext);
CGerror error;
CGeffect effect;
const char* string;
std::string shader;
shader =
"struct VS_INPUT"
"{"
" float3 pos : ATTR0;"
"};"
"struct FS_INPUT"
"{"
" float4 pos : POSITION;"
" float2 tex : TEXCOORD0;"
"};"
"struct FS_OUTPUT"
"{"
" float4 color : COLOR;"
"};"
"FS_INPUT VS( VS_INPUT In )"
"{"
" FS_INPUT Out;"
" Out.pos = float4( In.pos, 1.0f );"
" Out.tex = float2( 0.0f, 0.0f );"
" return Out;"
"}"
"FS_OUTPUT FS( FS_INPUT In )"
"{"
" FS_OUTPUT Out;"
" Out.color = float4(1.0f, 0.0f, 0.0f, 1.0f);"
" return Out;"
"}"
"technique t0"
"{"
" pass p0"
" {"
" VertexProgram = compile gp4vp VS();"
" FragmentProgram = compile gp4fp FS();"
" }"
"}";
effect = cgCreateEffect(cgcontext, shader.c_str(), NULL);
error = cgGetError();
if(error)
{
string = cgGetLastListing(cgcontext);
fprintf(stderr, "Shader compiler: %s\n", string);
}
glClearColor ( 0.0, 0.0, 1.0, 1.0 );
glClear ( GL_COLOR_BUFFER_BIT );
float* vert = new float[9];
vert[0] = 0.0; vert[1] = 0.5; vert[2] =-1.0;
vert[3] =-1.0; vert[4] =-0.5; vert[5] =-1.0;
vert[6] = 1.0; vert[7] =-0.5; vert[8]= -1.0;
unsigned int m_vaoID;
unsigned int m_vboID;
glGenVertexArrays(1, &m_vaoID);
glBindVertexArray(m_vaoID);
glGenBuffers(1, &m_vboID);
glBindBuffer(GL_ARRAY_BUFFER, m_vboID);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(GLfloat), vert, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
CGtechnique tech = cgGetFirstTechnique( effect );
CGpass pass = cgGetFirstPass(tech);
while (pass)
{
cgSetPassState(pass);
glDrawArrays(GL_TRIANGLES, 0, 3);
cgResetPassState(pass);
pass = cgGetNextPass(pass);
}
glDisableVertexAttribArray( 0 );
glBindVertexArray(0);
delete[] vert;
glBindBuffer(GL_ARRAY_BUFFER, 0);
glDeleteBuffers(1, &m_vboID);
glDeleteVertexArrays(1, &m_vaoID);
SDL_GL_SwapWindow(mainwindow);
SDL_Delay(2000);
SDL_GL_DeleteContext(maincontext);
SDL_DestroyWindow(mainwindow);
SDL_Quit();
return 0;
}
What am I doing wrong?

I compiled the code and got the same result. So I added a Cg error handler to get a bit more information:
void errorHandler(CGcontext context, CGerror error, void * appdata) {
fprintf(stderr, "%s\n", cgGetErrorString(error));
}
...
cgSetErrorHandler(&errorHandler, NULL);
When cgSetPassState and cgResetPassState were called I got the following error message:
Technique did not pass validation.
Not really very informative, of course. So I used GLIntercept to trace all OpenGL calls to a log file.
This time, when glewInit was called I got the following error message in the log file:
glGetString(GL_EXTENSIONS)=NULL glGetError() = GL_INVALID_ENUM
According to the OpenGL documentation, glGetString must not be called with GL_EXTENSIONS: that usage was deprecated in 3.0 and removed from core contexts, and glGetStringi must be used instead.
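For reference, this is roughly how the extension list has to be enumerated on a 3.0+ context (a minimal sketch, error checking omitted):
GLint numExtensions = 0;
glGetIntegerv(GL_NUM_EXTENSIONS, &numExtensions);
for (GLint i = 0; i < numExtensions; ++i)
    printf("%s\n", (const char *) glGetStringi(GL_EXTENSIONS, i));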
Finally, I found the issue in the GLEW library: http://sourceforge.net/p/glew/bugs/120/
I removed the GLEW dependency and tested with gl3.h (and the more recent glcorearb.h). I got the same error, but this time when cgGLRegisterStates was called.
I also tried Cg's trace.dll, only to get the same error (7939 = 0x1F03 = GL_EXTENSIONS):
glGetString
{
input:
name = 7939
output:
return = NULL
}
Then, I tested OpenGL 3.1 (SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 1);), and found that it was working fine:
glGetString(GL_EXTENSIONS)="GL_AMD_multi_draw_indirec..."
That is, the 3.1 context was compatible with previous OpenGL versions, but the 3.2 context was not.
After a bit of Internet digging I found that you can create this kind of backward-compatible OpenGL context with SDL just by adding this line to the code:
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_COMPATIBILITY);
IMHO, the Cg Toolkit needs this type of compatibility profile.
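For reference, a minimal sketch of the corrected context setup (the same SDL2 calls as in the question; only the profile-mask line is new):
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
// The compatibility profile keeps deprecated entry points such as
// glGetString(GL_EXTENSIONS) available, which the Cg runtime relies on.
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_COMPATIBILITY);
mainwindow = SDL_CreateWindow("Test", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 512, 512, SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);
maincontext = SDL_GL_CreateContext(mainwindow);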

"Cg 3.1 context does not yet support forward-compatible OpenGL contexts!"
Source:
http://3dgep.com/introduction-to-shader-programming-with-cg-3-1/
As the Cg project seems to be abandoned, it's also not likely to happen.

Related

glVertexAttribPointer raise GL_INVALID_OPERATION version 330

I have simplified my example to show the exact abnormal case. I am running this example on macOS 10.14.6, compiled with the clang LLVM compiler, using GLFW3. I also ran the same example on Windows 10 (64-bit) using SFML and got the same error, so the problem is not in the environment.
OpenGL version 4.1 ATI-2.11.20
Shading language version 4.10
Here is the exact code where the problem occurs:
glUseProgram(id_shader);
glEnableVertexAttribArray(param_Position);
//HERE IS ERROR "OpenGL ERROR: 0x00000502 GL_INVALID_OPERATION" RAISED
glVertexAttribPointer(param_Position, 3, GL_FLOAT, (GLboolean) false, 0, vertices);
Here is the full source code:
#include <stdlib.h>
#include <OpenGL/gl3.h>
#include <GLFW/glfw3.h>
#include "engine/Camera.h"
static const char *get_error_string_by_enum(GLenum err)
{
switch (err) {
case GL_INVALID_ENUM :
return "GL_INVALID_ENUM";
case GL_INVALID_VALUE :
return "GL_INVALID_VALUE";
case GL_INVALID_OPERATION :
return "GL_INVALID_OPERATION";
case GL_STACK_OVERFLOW :
return "GL_STACK_OVERFLOW";
case GL_STACK_UNDERFLOW :
return "GL_STACK_UNDERFLOW";
case GL_OUT_OF_MEMORY :
return "GL_OUT_OF_MEMORY";
#ifdef GL_INVALID_FRAMEBUFFER_OPERATION
case GL_INVALID_FRAMEBUFFER_OPERATION :
return "GL_INVALID_FRAMEBUFFER_OPERATION";
#endif
default: {
return "UNKNOWN";
}
}
}
static void check_gl()
{
char line[300];
GLenum err;
err = glGetError();
if (err != GL_NO_ERROR) {
sprintf(line, "OpenGL ERROR: 0x%.8X %s", err, get_error_string_by_enum(err));
printf("%s\n", line);
exit(-1);
}
}
int main()
{
char line[2000];
unsigned int windowWidth = 1024;
unsigned int windowHeight = 1024;
GLFWwindow* window;
//SETUP WINDOW AND CONTEXT
if (!glfwInit()){
fprintf(stdout, "ERROR on glfwInit");
return -1;
}
glfwWindowHint(GLFW_SAMPLES, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
window = glfwCreateWindow(windowWidth, windowHeight, "OpenGL", NULL, NULL);
if (!window)
{
fprintf(stderr, "Unable to create window.");
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window);
sprintf(line, "OpenGL version %s\n", (const char *) glGetString(GL_VERSION));
fprintf(stdout, line);
sprintf(line, "Shading language version %s\n", (const char *) glGetString(GL_SHADING_LANGUAGE_VERSION));
fprintf(stdout, line);
//SETUP OPENGL
glViewport(0, 0, windowWidth, windowHeight);
glEnable(GL_BLEND);
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glDepthMask(true);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
//SETUP SHADER PROGRAM
GLint success;
GLuint id_v = glCreateShader(GL_VERTEX_SHADER);
GLuint id_f = glCreateShader(GL_FRAGMENT_SHADER);
const char *vertex_shader_source = "#version 330\n"
"precision mediump float;\n"
"\n"
"in vec4 Position;\n"
"uniform mat4 MVPMatrix;\n"
"\n"
"void main()\n"
"{\n"
"\tgl_Position = MVPMatrix * Position;\n"
"\tgl_PointSize = 10.0;\n"
"}";
const char *fragment_shader_source = "#version 330\n"
"precision mediump float;\n"
"\n"
"uniform vec4 Color;\n"
"out vec4 FragCoord;\n"
"\n"
"void main()\n"
"{\n"
" FragCoord = Color;\n"
"}";
glShaderSource(id_v, 1, &vertex_shader_source, NULL);
glCompileShader(id_v);
glGetShaderiv(id_v, GL_COMPILE_STATUS, &success);
if (!success) {
glGetShaderInfoLog(id_v, 2000, NULL, line);
fprintf(stderr, line);
exit(-1);
}
glShaderSource(id_f, 1, &fragment_shader_source, NULL);
glCompileShader(id_f);
glGetShaderiv(id_f, GL_COMPILE_STATUS, &success);
if (!success) {
glGetShaderInfoLog(id_f, 2000, NULL, line);
fprintf(stderr, line);
exit(-1);
}
GLuint id_shader = glCreateProgram();
glAttachShader(id_shader, id_v);
glAttachShader(id_shader, id_f);
glLinkProgram(id_shader);
glGetProgramiv(id_shader, GL_LINK_STATUS, &success);
if (!success) {
glGetProgramInfoLog(id_shader, 2000, NULL, line);
fprintf(stderr, "program link error");
fprintf(stderr, line);
exit(-1);
}
GLuint param_Position = glGetAttribLocation(id_shader, "Position");
GLuint param_MVPMatrix = glGetUniformLocation(id_shader, "MVPMatrix");
GLuint param_Color = glGetUniformLocation(id_shader, "Color");
sprintf(line, "Params: param_Position=%d param_MVPMatrix=%d param_Color=%d\n", param_Position, param_MVPMatrix, param_Color);
fprintf(stdout, line);
//SETUP MATRIX
Camera *c = new Camera();
c->setCameraType(CameraType::PERSPECTIVE);
c->setWorldSize(100, 100);
c->lookFrom(5, 5, 5);
c->lookAt(0, 0, 0);
c->setFOV(100);
c->setUp(0, 0, 1);
c->calc();
c->getResultMatrix().dump();
//SETUP TRIANGLE
float vertices[] = {
0, 0, 0,
1, 0, 0,
1, 1, 0
};
while (!glfwWindowShouldClose(window))
{
//CLEAR FRAME
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glClearColor(0.3f, 0.3f, 0.3f, 1.0f);
//RENDER TRIANGLE
glUseProgram(id_shader);
glUniformMatrix4fv(param_MVPMatrix, 1, (GLboolean) false, c->getResultMatrix().data);
glUniform4f(param_Color, 1.0f, 0.5f, 0.0f, 1.0f);
glEnableVertexAttribArray(param_Position);
check_gl();
//HERE IS ERROR "OpenGL ERROR: 0x00000502 GL_INVALID_OPERATION" RAISED
glVertexAttribPointer(param_Position, 3, GL_FLOAT, (GLboolean) false, 0, vertices);
check_gl();
glDrawArrays(GL_TRIANGLES, 0, 3);
glfwSwapBuffers(window);
glfwPollEvents();
}
glfwTerminate();
return 0;
}
No errors are detected before the line
glVertexAttribPointer(param_Position, 3, GL_FLOAT, (GLboolean) false, 0, vertices);
and GL_INVALID_OPERATION is raised immediately after it.
Program output is:
Environment versions:
OpenGL version 4.1 ATI-2.11.20
Shading language version 4.10
Shader param names:
Params: param_Position=0 param_MVPMatrix=1 param_Color=0
Matrix
-0.593333 -0.342561 -0.577350 -0.577350
0.593333 -0.342561 -0.577350 -0.577350
0.000000 0.685122 -0.577350 -0.577350
0.000000 0.000000 8.640252 8.660253
Error
OpenGL ERROR: 0x00000502 GL_INVALID_OPERATION
I have already spent a few days on this problem and have run out of ideas. I would be grateful for any advice or clarification.
P.S. Here is the glfwinfo output for my system:
/glfwinfo -m3 -n2 --profile=compat
GLFW header version: 3.4.0
GLFW library version: 3.4.0
GLFW library version string: "3.4.0 Cocoa NSGL EGL OSMesa"
OpenGL context version string: "4.1 ATI-2.11.20"
OpenGL context version parsed by GLFW: 4.1.0
OpenGL context flags (0x00000001): forward-compatible
OpenGL context flags parsed by GLFW: forward-compatible
OpenGL profile mask (0x00000001): core
OpenGL profile mask parsed by GLFW: core
OpenGL context renderer string: "AMD Radeon R9 M370X OpenGL Engine"
OpenGL context vendor string: "ATI Technologies Inc."
OpenGL context shading language version: "4.10"
OpenGL framebuffer:
red: 8 green: 8 blue: 8 alpha: 8 depth: 24 stencil: 8
samples: 0 sample buffers: 0
Vulkan loader: missing
Since you use a core profile context (GLFW_OPENGL_CORE_PROFILE), the default Vertex Array Object 0 is not valid; furthermore, you have to use a Vertex Buffer Object.
When glVertexAttribPointer is called, the vertex array specification is stored in the state vector of the currently bound vertex array object. The buffer currently bound to the GL_ARRAY_BUFFER target is associated with the attribute, and the name (value) of that buffer object is stored in the state vector of the VAO.
In a compatibility profile there exists the default vertex array object 0, which can be used at any time, but it is not valid in a core profile context. Furthermore, in a compatibility profile it is not necessary to use a VBO; the last parameter of glVertexAttribPointer can be a pointer to the vertex data itself.
The easiest solution is to switch to a compatibility profile GLFW_OPENGL_COMPAT_PROFILE:
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
If you don't want to do that, or your system doesn't provide it, then you have to read about Vertex Specification. Create a vertex buffer object and a vertex array object before the render loop:
// vertex buffer object
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
// vertex array object
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vbo); // this is not necessary, because "vbo" is still bound
glVertexAttribPointer(param_Position, 3, GL_FLOAT, (GLboolean) false, 0, nullptr);
// the following is not necessary, you can let them bound
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
And use it in the loop to draw the mesh:
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);

Getting black screen when using glCreateVertexArrays

I am learning OpenGL right now. I have bought a book called OpenGL Superbible, but I haven't managed to configure the environment properly. I use GLFW 3.2 as the windowing toolkit (if that is what it is called) and GLEW 2.0.
I am trying to compile and use shaders to draw on screen. According to the book this should draw a triangle on screen. But it doesn't. Instead, it shows the clear background color that is set by glClearColor.
This is the Code:
#include <iostream>
#include <GLFW\glfw3.h>
#include <GL\glew.h>
GLuint CompileShaders();
int main(void) {
// Initialise GLFW
if (!glfwInit()) {
fprintf(stderr, "Failed to initialize GLFW\n");
getchar();
return -1;
}
// Open a window and create its OpenGL context
GLFWwindow *window;
window = glfwCreateWindow(1024, 768, "Tutorial 01", NULL, NULL);
if (window == NULL) {
fprintf(stderr, "Failed to open GLFW window. If you have an Intel GPU, "
"they are not 3.3 compatible. Try the 2.1 version of the "
"tutorials.\n");
getchar();
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window);
// Initialize GLEW
if (glewInit() != GLEW_OK) {
fprintf(stderr, "Failed to initialize GLEW\n");
getchar();
glfwTerminate();
return -1;
}
// Ensure we can capture the escape key being pressed below
glfwSetInputMode(window, GLFW_STICKY_KEYS, GL_TRUE);
GLuint RenderingProgram = CompileShaders();
GLuint VertexArrayObject;
glCreateVertexArrays(1, &VertexArrayObject);
glBindVertexArray(VertexArrayObject);
int LoopCounter = 0;
do {
// Clear the screen. It's not mentioned before Tutorial 02, but it can cause
// flickering, so it's there nonetheless.
/*const GLfloat red[] = {
(float)sin(LoopCounter++ / 100.0f)*0.5f + 0.5f,
(float)cos(LoopCounter++ / 100.0f)*0.5f + 0.5f,
0.0f, 1.0f
};*/
// glClearBufferfv(GL_COLOR, 0, red);
// Draw nothing, see you in tutorial 2 !
glUseProgram(RenderingProgram);
glDrawArrays(GL_TRIANGLES, 0, 3);
// Swap buffers
glfwSwapBuffers(window);
glfwPollEvents();
} // Check if the ESC key was pressed or the window was closed
while (glfwGetKey(window, GLFW_KEY_ESCAPE) != GLFW_PRESS &&
glfwWindowShouldClose(window) == 0);
// Close OpenGL window and terminate GLFW
glfwTerminate();
return 0;
}
GLuint CompileShaders() {
GLuint VertexShader;
GLuint FragmentShader;
GLuint Program;
static const GLchar *VertexShaderSource[] = {
"#version 450 core "
" "
"\n",
" "
" \n",
"void main(void) "
" "
"\n",
"{ "
" "
" \n",
"const vec4 vertices[3] = vec4[3](vec4(0.25, -0.25, 0.5, 1.0),\n",
" "
"vec4(-0.25, -0.25, 0.5, 1.0),\n",
" "
"vec4(0.25, 0.25, 0.5, 1.0)); \n",
" gl_Position = vertices[gl_VertexID]; \n",
"} "
" "
" \n"};
static const GLchar *FragmentShaderSource[] = {
"#version 450 core "
" "
"\n",
" "
" \n",
"out vec4 color; \n",
" "
" \n",
"void main(void) "
" "
"\n",
"{ "
" "
" \n",
" color = vec4(0.0, 0.8, 1.0, 1.0); \n",
"} "
" "
" \n"};
// Create and compile vertex shader.
VertexShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(VertexShader, 1, VertexShaderSource, NULL);
glCompileShader(VertexShader);
// Create and compile fragment shader.
FragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(FragmentShader, 1, FragmentShaderSource, NULL);
glCompileShader(FragmentShader);
// Create program, attach shaders to it, and link it
Program = glCreateProgram();
glAttachShader(Program, VertexShader);
glAttachShader(Program, FragmentShader);
glLinkProgram(Program);
// Delete the shaders as the program has them now.
glDeleteShader(FragmentShader);
glDeleteShader(VertexShader);
return Program;
}
I am working in Visual Studio 2015. I have all the libraries needed to develop some OpenGL (I think), but something is wrong. Please help me. By the way, I know the glCreateVertexArrays() function is only in OpenGL 4.5 and above, since the book is based on OpenGL 4.5.
I will go crazy soon because there are no proper tutorials for beginners. People who have learned this are very ambitious people; I bow before them.
Your shaders shouldn't compile:
glShaderSource(VertexShader, 1, VertexShaderSource, NULL);
This tells the GL that it should expect an array of one GLchar pointer. However, your GLSL code is actually split into several individual strings (note the commas):
static const GLchar *VertexShaderSource[] = {
"...GLSL-code..."
"...GLSL-code..."
"...GLSL-code...", // <- this comma ends the first string vertexShaderSource[0]
"...GLSL-code..." // vertexShaderSource[1] starts here
[...]
There are two possible solutions:
Just remove those commas, so that your array contains just one element pointing to the whole GLSL source as one string.
Tell the GL the truth about your data:
glShaderSource(..., sizeof(VertexShaderSource)/sizeof(VertexShaderSource[0]), VertexShaderSource, NULL)
Apart from that, you should always query the compilation and link status of your shaders and program objects, and also query the shader compile and program link info logs. They will contain human-readable messages telling you why the compilation or link failed.
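For example, a minimal compile-status check might look like this (a sketch using the question's VertexShader handle; do the same for the fragment shader, and use glGetProgramiv/glGetProgramInfoLog for the link stage):
GLint compiled = GL_FALSE;
glGetShaderiv(VertexShader, GL_COMPILE_STATUS, &compiled);
if (compiled != GL_TRUE) {
    GLchar log[1024];
    glGetShaderInfoLog(VertexShader, sizeof(log), NULL, log);
    fprintf(stderr, "Vertex shader compile error: %s\n", log);
}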

OpenGL Red Book with Mac OS X

I would like to work through the OpenGL Red Book, The OpenGL Programming Guide, 8th edition, using Xcode on Mac OS X.
I am unable to run the first code example, triangles.cpp. I have tried including the GLUT and GL frameworks that come with Xcode and I have searched around enough to see that I am not likely to figure this out on my own.
Assuming that I have a fresh installation of Mac OS X, and I have freshly installed Xcode with Xcode command-line tools, what are the step-by-step instructions to be able to run triangles.cpp in that environment?
Unlike this question, my preference would be not to use Cocoa, Objective-C or Swift. My preference would be to stay in C++/C only. An answer is only correct if I can follow it step-by-step and end up with a running triangles.cpp program.
My preference is Mac OS X 10.9, however a correct answer can assume 10.9, 10.10 or 10.11.
Thank you.
///////////////////////////////////////////////////////////////////////
//
// triangles.cpp
//
///////////////////////////////////////////////////////////////////////
#include <iostream>
using namespace std;
#include "vgl.h"
#include "LoadShader.h"
enum VAO_IDs { Triangles, NumVAOs };
enum Buffer_IDs { ArrayBuffer, NumBuffers };
enum Attrib_IDs { vPosition = 0 };
GLuint VAOs[NumVAOs];
GLuint Buffers[NumBuffers];
const GLuint NumVertices = 6;
//---------------------------------------------------------------------
//
// init
//
void
init(void)
{
glGenVertexArrays(NumVAOs, VAOs);
glBindVertexArray(VAOs[Triangles]);
GLfloat vertices[NumVertices][2] = {
{ -0.90, -0.90 }, // Triangle 1
{ 0.85, -0.90 },
{ -0.90, 0.85 },
{ 0.90, -0.85 }, // Triangle 2
{ 0.90, 0.90 },
{ -0.85, 0.90 }
};
glGenBuffers(NumBuffers, Buffers);
glBindBuffer(GL_ARRAY_BUFFER, Buffers[ArrayBuffer]);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices),
vertices, GL_STATIC_DRAW);
ShaderInfo shaders[] = {
{ GL_VERTEX_SHADER, "triangles.vert" },
{ GL_FRAGMENT_SHADER, "triangles.frag" },
{ GL_NONE, NULL }
};
GLuint program = LoadShaders(*shaders);
glUseProgram(program);
glVertexAttribPointer(vPosition, 2, GL_FLOAT,
GL_FALSE, 0, BUFFER_OFFSET(0));
glEnableVertexAttribArray(vPosition);
}
//---------------------------------------------------------------------
//
// display
//
void
display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
glBindVertexArray(VAOs[Triangles]);
glDrawArrays(GL_TRIANGLES, 0, NumVertices);
glFlush();
}
//---------------------------------------------------------------------
//
// main
//
int
main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA);
glutInitWindowSize(512, 512);
glutInitContextVersion(4, 3);
glutInitContextProfile(GLUT_CORE_PROFILE);
glutCreateWindow(argv[0]);
glewExperimental = GL_TRUE;
if (glewInit()) {
cerr << "Unable to initialize GLEW ... exiting" << endl;
exit(EXIT_FAILURE);
}
init();
glutDisplayFunc(display);
glutMainLoop();
}
Edit 1: In response to the first comment, here is the naive effort.
Open Xcode 5.1.1 on Mac OS X 10.9.5
Create a new C++ Command-line project.
Paste over the contents of main.cpp with the contents of triangles.cpp.
Click on the project -> Build Phases -> Link Binary with Libraries
Add OpenGL.framework and GLUT.framework
Result: "/Users/xxx/Desktop/Triangles/Triangles/main.cpp:10:10: 'vgl.h' file not found"
Edit 2: Added the vgl translation unit and the LoadShaders translation unit, and also added libFreeGlut.a and libGlew32.a to my project's compilation/linking. Moved all of the OpenGL book's Include contents to my project's source directory. Had to change several include statements to use quoted includes instead of angled includes. It feels like this is closer to working, but it is unable to find LoadShader.h. Note that the translation unit in the OpenGL download is called LoadShaders (plural). Changing triangles.cpp to reference LoadShaders.h fixed the include problem, but the contents of that translation unit don't seem to match the signatures of what's being called from triangles.cpp.
There are some issues with the source and with the files in oglpg-8th-edition.zip:
triangles.cpp uses non-standard GLUT functions that aren't included in glut, and instead are only part of the freeglut implementation (glutInitContextVersion and glutInitContextProfile). freeglut doesn't really support OS X and building it instead relies on additional X11 support. Instead of telling you how to do this I'm just going to modify the source to build with OS X's GLUT framework.
The code depends on glew, and the book's source download apparently doesn't include a binary you can use, so you'll need to build it for yourself.
Build GLEW with the following commands:
git clone git://git.code.sf.net/p/glew/code glew
cd glew
make extensions
make
Now:
Create a C++ command line Xcode project
Set the executable to link with the OpenGL and GLUT frameworks and the glew dylib you just built.
Modify the project "Header Search Paths" to include the location of the glew headers for the library you built, followed by the path to oglpg-8th-edition/include
Add oglpg-8th-edition/lib/LoadShaders.cpp to your xcode project
Paste the triangles.cpp source into the main.cpp of your Xcode project
Modify the source: replace #include "vgl.h" with:
#include <GL/glew.h>
#include <OpenGL/gl3.h>
#include <GLUT/glut.h>
#define BUFFER_OFFSET(x) ((const void*) (x))
Also make sure that the typos in the version of triangle.cpp that you include in your question are fixed: You include "LoadShader.h" when it should be "LoadShaders.h", and LoadShaders(*shaders); should be LoadShaders(shaders). (The code printed in my copy of the book doesn't contain these errors.)
Delete the calls to glutInitContextVersion and glutInitContextProfile.
Change the parameter to glutInitDisplayMode to GLUT_RGBA | GLUT_3_2_CORE_PROFILE
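Taken together, the modified main() would look roughly like this (a sketch under the assumptions above; GLUT_3_2_CORE_PROFILE is an Apple-specific GLUT extension):
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    // Replaces the removed glutInitContextVersion/glutInitContextProfile calls.
    glutInitDisplayMode(GLUT_RGBA | GLUT_3_2_CORE_PROFILE);
    glutInitWindowSize(512, 512);
    glutCreateWindow(argv[0]);
    glewExperimental = GL_TRUE;
    if (glewInit()) {
        cerr << "Unable to initialize GLEW ... exiting" << endl;
        exit(EXIT_FAILURE);
    }
    init();
    glutDisplayFunc(display);
    glutMainLoop();
}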
At this point the code builds, links, and runs, however running the program displays a black window for me instead of the expected triangles.
About fixing the black window issue mentioned in Matthew's and bames53's comments:
Follow bames53's answer.
Define the shaders as strings:
const char *pTriangleVert =
"#version 410 core\n\
layout(location = 0) in vec4 vPosition;\n\
void\n\
main()\n\
{\n\
gl_Position= vPosition;\n\
}";
const char *pTriangleFrag =
"#version 410 core\n\
out vec4 fColor;\n\
void\n\
main()\n\
{\n\
fColor = vec4(0.0, 0.0, 1.0, 1.0);\n\
}";
OpenGL 4.1 is supported on my iMac, so I changed the version to 410:
ShaderInfo shaders[] = {
{ GL_VERTEX_SHADER, pTriangleVert},
{ GL_FRAGMENT_SHADER, pTriangleFrag },
{ GL_NONE, NULL }
};
Modify the ShaderInfo struct slightly
change
typedef struct {
GLenum type;
const char* filename;
GLuint shader;
} ShaderInfo;
into
typedef struct {
GLenum type;
const char* source;
GLuint shader;
} ShaderInfo;
Modify the loadShader function slightly:
comment out the code that reads the shader from a file, changing
/*
const GLchar* source = ReadShader( entry->filename );
if ( source == NULL ) {
for ( entry = shaders; entry->type != GL_NONE; ++entry ) {
glDeleteShader( entry->shader );
entry->shader = 0;
}
return 0;
}
glShaderSource( shader, 1, &source, NULL );
delete [] source;*/
into
glShaderSource(shader, 1, &entry->source, NULL);
You'd better turn on DEBUG in case there are shader compilation errors.
You can use the example from this link. It's almost the same, but it uses GLFW instead of GLUT.
http://www.tomdalling.com/blog/modern-opengl/01-getting-started-in-xcode-and-visual-cpp/
/*
main
Copyright 2012 Thomas Dalling - http://tomdalling.com/
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
//#include "platform.hpp"
// third-party libraries
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>
// standard C++ libraries
#include <cassert>
#include <iostream>
#include <stdexcept>
#include <cmath>
// tdogl classes
#include "Program.h"
// constants
const glm::vec2 SCREEN_SIZE(800, 600);
// globals
GLFWwindow* gWindow = NULL;
tdogl::Program* gProgram = NULL;
GLuint gVAO = 0;
GLuint gVBO = 0;
// loads the vertex shader and fragment shader, and links them to make the global gProgram
static void LoadShaders() {
std::vector<tdogl::Shader> shaders;
shaders.push_back(tdogl::Shader::shaderFromFile("vertex-shader.txt", GL_VERTEX_SHADER));
shaders.push_back(tdogl::Shader::shaderFromFile("fragment-shader.txt", GL_FRAGMENT_SHADER));
gProgram = new tdogl::Program(shaders);
}
// loads a triangle into the VAO global
static void LoadTriangle() {
// make and bind the VAO
glGenVertexArrays(1, &gVAO);
glBindVertexArray(gVAO);
// make and bind the VBO
glGenBuffers(1, &gVBO);
glBindBuffer(GL_ARRAY_BUFFER, gVBO);
// Put the three triangle vertices into the VBO
GLfloat vertexData[] = {
// X Y Z
0.0f, 0.8f, 0.0f,
-0.8f,-0.8f, 0.0f,
0.8f,-0.8f, 0.0f,
};
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), vertexData, GL_STATIC_DRAW);
// connect the xyz to the "vert" attribute of the vertex shader
glEnableVertexAttribArray(gProgram->attrib("vert"));
glVertexAttribPointer(gProgram->attrib("vert"), 3, GL_FLOAT, GL_FALSE, 0, NULL);
// unbind the VBO and VAO
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
// draws a single frame
static void Render() {
// clear everything
glClearColor(0, 0, 0, 1); // black
glClear(GL_COLOR_BUFFER_BIT);
// bind the program (the shaders)
glUseProgram(gProgram->object());
// bind the VAO (the triangle)
glBindVertexArray(gVAO);
// draw the VAO
glDrawArrays(GL_TRIANGLES, 0, 3);
// unbind the VAO
glBindVertexArray(0);
// unbind the program
glUseProgram(0);
// swap the display buffers (displays what was just drawn)
glfwSwapBuffers(gWindow);
}
void OnError(int errorCode, const char* msg) {
throw std::runtime_error(msg);
}
// the program starts here
void AppMain() {
// initialise GLFW
glfwSetErrorCallback(OnError);
if(!glfwInit())
throw std::runtime_error("glfwInit failed");
// open a window with GLFW
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
gWindow = glfwCreateWindow((int)SCREEN_SIZE.x, (int)SCREEN_SIZE.y, "OpenGL Tutorial", NULL, NULL);
if(!gWindow)
throw std::runtime_error("glfwCreateWindow failed. Can your hardware handle OpenGL 3.2?");
// GLFW settings
glfwMakeContextCurrent(gWindow);
// initialise GLEW
glewExperimental = GL_TRUE; //stops glew crashing on OSX :-/
if(glewInit() != GLEW_OK)
throw std::runtime_error("glewInit failed");
// print out some info about the graphics drivers
std::cout << "OpenGL version: " << glGetString(GL_VERSION) << std::endl;
std::cout << "GLSL version: " << glGetString(GL_SHADING_LANGUAGE_VERSION) << std::endl;
std::cout << "Vendor: " << glGetString(GL_VENDOR) << std::endl;
std::cout << "Renderer: " << glGetString(GL_RENDERER) << std::endl;
// make sure OpenGL version 3.2 API is available
if(!GLEW_VERSION_3_2)
throw std::runtime_error("OpenGL 3.2 API is not available.");
// load vertex and fragment shaders into opengl
LoadShaders();
// create buffer and fill it with the points of the triangle
LoadTriangle();
// run while the window is open
while(!glfwWindowShouldClose(gWindow)){
// process pending events
glfwPollEvents();
// draw one frame
Render();
}
// clean up and exit
glfwTerminate();
}
int main(int argc, char *argv[]) {
try {
AppMain();
} catch (const std::exception& e){
std::cerr << "ERROR: " << e.what() << std::endl;
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
I have adapted the project for the Mac here:
https://github.com/badousuan/openGLredBook9th
The project builds successfully and most demos run as expected. However, the original code is based on OpenGL 4.5, while the Mac only supports version 4.1, so some newer API calls may fail. If a target doesn't work well, you should consider this version issue and adapt the code accordingly.
I use the code from this tutorial: http://antongerdelan.net/opengl/hellotriangle.html, and it works on my Mac.
Here is the code I run.
#include <GL/glew.h> // include GLEW and new version of GL on Windows
#include <GLFW/glfw3.h> // GLFW helper library
#include <stdio.h>
int main() {
// start GL context and O/S window using the GLFW helper library
if (!glfwInit()) {
fprintf(stderr, "ERROR: could not start GLFW3\n");
return 1;
}
// uncomment these lines if on Apple OS X
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
GLFWwindow* window = glfwCreateWindow(640, 480, "Hello Triangle", NULL, NULL);
if (!window) {
fprintf(stderr, "ERROR: could not open window with GLFW3\n");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent(window);
// start GLEW extension handler
glewExperimental = GL_TRUE;
glewInit();
// get version info
const GLubyte* renderer = glGetString(GL_RENDERER); // get renderer string
const GLubyte* version = glGetString(GL_VERSION); // version as a string
printf("Renderer: %s\n", renderer);
printf("OpenGL version supported %s\n", version);
// tell GL to only draw onto a pixel if the shape is closer to the viewer
glEnable(GL_DEPTH_TEST); // enable depth-testing
glDepthFunc(GL_LESS); // depth-testing interprets a smaller value as "closer"
/* OTHER STUFF GOES HERE NEXT */
float points[] = {
0.0f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f
};
GLuint vbo = 0; // vertex buffer object
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(float), points, GL_STATIC_DRAW);
GLuint vao = 0; // vertex array object
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
const char* vertex_shader =
"#version 400\n"
"in vec3 vp;"
"void main() {"
" gl_Position = vec4(vp, 1.0);"
"}";
const char* fragment_shader =
"#version 400\n"
"out vec4 frag_colour;"
"void main() {"
" frag_colour = vec4(0.5, 0.0, 0.5, 1.0);"
"}";
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vertex_shader, NULL);
glCompileShader(vs);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fragment_shader, NULL);
glCompileShader(fs);
GLuint shader_programme = glCreateProgram();
glAttachShader(shader_programme, fs);
glAttachShader(shader_programme, vs);
glLinkProgram(shader_programme);
while(!glfwWindowShouldClose(window)) {
// wipe the drawing surface clear
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(shader_programme);
glBindVertexArray(vao);
// draw points 0-3 from the currently bound VAO with current in-use shader
glDrawArrays(GL_TRIANGLES, 0, 3);
// update other events like input handling
glfwPollEvents();
// put the stuff we've been drawing onto the display
glfwSwapBuffers(window);
}
// close GL context and any other GLFW resources
glfwTerminate();
return 0;
}

Meaning of "index" parameter in glEnableVertexAttribArray and (possibly) a bug in the OS X OpenGL implementation

1) Do I understand correctly that to draw using vertex arrays or VBOs I need for all my attributes to either call glBindAttribLocation before the shader program linkage or call glGetAttribLocation after the shader program was successfully linked and then use the bound/obtained index in the glVertexAttribPointer and glEnableVertexAttribArray calls?
To be more specific: these three functions - glGetAttribLocation, glVertexAttribPointer and glEnableVertexAttribArray - they all have an input parameter named "index". Is it the same "index" for all the three? And is it the same thing as the one returned by glGetAttribLocation?
If yes:
2) I've been facing a problem on OS X, I described it here: https://stackoverflow.com/questions/28093919/using-default-attribute-location-doesnt-work-on-osx-osx-opengl-bug , but unfortunately didn't get any replies.
The problem is that depending on what attribute locations I bind to my attributes I do or do not see anything on the screen. I only see this behavior on my MacBook Pro with OS X 10.9.5; I've tried running the same code on Linux and Windows and it seems to work on those platforms independently from which locations are my attributes bound to.
Here is a code example (which is supposed to draw a red triangle on the screen) that exhibits the problem:
#include <iostream>
#include <GLFW/glfw3.h>
GLuint global_program_object;
GLint global_position_location;
GLint global_aspect_ratio_location;
GLuint global_buffer_names[1];
int LoadShader(GLenum type, const char *shader_source)
{
GLuint shader;
GLint compiled;
shader = glCreateShader(type);
if (shader == 0)
return 0;
glShaderSource(shader, 1, &shader_source, NULL);
glCompileShader(shader);
glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
if (!compiled)
{
GLint info_len = 0;
glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &info_len);
if (info_len > 1)
{
char* info_log = new char[info_len];
glGetShaderInfoLog(shader, info_len, NULL, info_log);
std::cout << "Error compiling shader" << info_log << std::endl;
delete[] info_log;
}
glDeleteShader(shader);
return 0;
}
return shader;
}
int InitGL()
{
char vertex_shader_source[] =
"attribute vec4 att_position; \n"
"attribute float dummy;\n"
"uniform float uni_aspect_ratio; \n"
"void main() \n"
" { \n"
" vec4 test = att_position * dummy;\n"
" mat4 mat_projection = \n"
" mat4(1.0 / uni_aspect_ratio, 0.0, 0.0, 0.0, \n"
" 0.0, 1.0, 0.0, 0.0, \n"
" 0.0, 0.0, -1.0, 0.0, \n"
" 0.0, 0.0, 0.0, 1.0); \n"
" gl_Position = att_position; \n"
" gl_Position *= mat_projection; \n"
" } \n";
char fragment_shader_source[] =
"void main() \n"
" { \n"
" gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); \n"
" } \n";
GLuint vertex_shader;
GLuint fragment_shader;
GLuint program_object;
GLint linked;
vertex_shader = LoadShader(GL_VERTEX_SHADER , vertex_shader_source );
fragment_shader = LoadShader(GL_FRAGMENT_SHADER, fragment_shader_source);
program_object = glCreateProgram();
if(program_object == 0)
return 1;
glAttachShader(program_object, vertex_shader );
glAttachShader(program_object, fragment_shader);
// Here any index except 0 results in observing the black screen
glBindAttribLocation(program_object, 1, "att_position");
glLinkProgram(program_object);
glGetProgramiv(program_object, GL_LINK_STATUS, &linked);
if(!linked)
{
GLint info_len = 0;
glGetProgramiv(program_object, GL_INFO_LOG_LENGTH, &info_len);
if(info_len > 1)
{
char* info_log = new char[info_len];
glGetProgramInfoLog(program_object, info_len, NULL, info_log);
std::cout << "Error linking program" << info_log << std::endl;
delete[] info_log;
}
glDeleteProgram(program_object);
return 1;
}
global_program_object = program_object;
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glUseProgram(global_program_object);
global_position_location = glGetAttribLocation (global_program_object, "att_position");
global_aspect_ratio_location = glGetUniformLocation(global_program_object, "uni_aspect_ratio");
GLfloat vertices[] = {-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.0f, 0.5f, 0.0f};
glGenBuffers(1, global_buffer_names);
glBindBuffer(GL_ARRAY_BUFFER, global_buffer_names[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 9, vertices, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
return 0;
}
void Render()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
glUseProgram(global_program_object);
glBindBuffer(GL_ARRAY_BUFFER, global_buffer_names[0]);
glVertexAttribPointer(global_position_location, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(global_position_location);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableVertexAttribArray(global_position_location);
glUseProgram(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
void FreeGL()
{
glDeleteBuffers(1, global_buffer_names);
glDeleteProgram(global_program_object);
}
void SetViewport(int width, int height)
{
glViewport(0, 0, width, height);
glUseProgram(global_program_object);
glUniform1f(global_aspect_ratio_location, static_cast<GLfloat>(width) / static_cast<GLfloat>(height));
}
int main(void)
{
GLFWwindow* window;
if (!glfwInit())
return -1;
window = glfwCreateWindow(640, 480, "Hello World", NULL, NULL);
if (!window)
{
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window);
InitGL();
// Double the resolution to correctly draw with Retina display
SetViewport(1280, 960);
while (!glfwWindowShouldClose(window))
{
Render();
glfwSwapBuffers(window);
glfwPollEvents();
}
FreeGL();
glfwTerminate();
return 0;
}
Does this look like a bug to you? Can anyone reproduce it? If it's a bug where should I report it?
P.S.
I've also tried SDL instead of GLFW, the behavior is the same...
The behavior you see is actually correct as per the spec, and Mac OS X has something to do with this, but only in a very indirect way.
To answer question 1) first: You are basically correct. With modern GLSL (>=3.30), you can also specify the desired index via the layout(location=...) qualifier directly in the shader code, instead of using glBindAttribLocation(), but that is only a side note.
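For illustration, here are the two equivalent ways to pin the attribute index, reusing the question's names (a sketch, not the only way to do it):
// Option A: bind before linking, from client code (available since GL 2.0)
glBindAttribLocation(program_object, 1, "att_position");
glLinkProgram(program_object);
// Option B: with GLSL >= 3.30, directly in the shader source:
//   layout(location = 1) in vec4 att_position;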
The problem you are facing is that you are using a legacy GL context. You do not specify a desired version, so you will get maximum compatibility with the old ways. On Windows, you are very likely to get a compatibility profile of the highest version supported by the implementation (typically GL3.x or GL4.x on non-ancient GPUs).
However, on OS X, you are limited to at most GL 2.1. And this is where the crux lies: your code is invalid in GL 2.x. To explain this, I have to go back in GL history. In the beginning, there was immediate mode, where you drew like this:
glBegin(primType);
glColor3f(r,g,b);
glVertex3f(x,y,z);
[...]
glColor3f(r,g,b);
glVertex3f(x,y,z);
glEnd();
Note that the glVertex call is what actually creates a vertex. All other per-vertex attributes are basically some current vertex state which can be set any time, but calling glVertex will take all of those current attributes together with the position to form the vertex which is fed to the pipeline.
Now when vertex arrays were added, we got functions like glVertexPointer(), glColorPointer() and so on, and each attribute array could be enabled or disabled separately via glEnableClientState(). The array-based draw calls are actually defined in terms of immediate mode in the OpenGL 2.1 specification: glDrawArrays(GLenum mode, GLint first, GLsizei count) is equivalent to
glBegin(mode);
for (i=0; i<count; i++)
ArrayElement(first + i);
glEnd();
with ArrayElement(i) being defined as follows (this one is derived from the wording of the GL 1.5 spec):
if ( normal_array_enabled )
Normal3...( <i-th normal value> );
[...] // similar for all other builtin attribs
if ( vertex_array_enabled)
Vertex...( <i-th vertex value> );
This definition has a subtle consequence: you must have the GL_VERTEX_ARRAY attribute array enabled, otherwise nothing will be drawn, since no equivalent of the glVertex calls is generated.
Now when the generic attributes were added in GL2.0, a special guarantee was made: generic attribute 0 is aliasing the builtin glVertex attribute - so both can be used interchangeably, in immediate mode as well as in arrays. So glVertexAttrib3f(0,x,y,z) "creates" a vertex the same way glVertex3f(x,y,z) would have. And using an array with glEnableVertexAttribArray(0) is as good as glEnableClientState(GL_VERTEX_ARRAY).
In GL 2.1, the ArrayElement(i) function now looks as follows:
if ( normal_array_enabled )
Normal3...( <i-th normal value> );
[...] // similar for all other builtin attribs
for (a=1; a<max_attribs; a++) {
if ( generic_attrib_a_enabled )
glVertexAttrib...(a, <i-th value of attrib a> );
}
if ( generic_attrib_0_enabled)
glVertexAttrib...(0, <i-th value of attrib 0> );
else if ( vertex_array_enabled)
Vertex...( <i-th vertex value> );
Now this is what happens to you. You absolutely need attribute 0 (or the old GL_VERTEX_ARRAY attribute) to be enabled for this to generate any vertices for the pipeline.
Note that it should be possible in theory to just enable attribute 0, no matter if it is used in the shader or not. You should just make sure that the corresponding attrib pointer points to valid memory, to be 100% safe. So you could simply check whether your attribute index 0 is used, and if not, just set the same pointer for attribute 0 as you did for your real attribute, and the GL should be happy. But I haven't tried this.
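A sketch of that untested workaround, reusing the variables from the question's Render() function (hypothetical, per the caveat above):
// Untested: if the real attribute did not land on index 0, mirror its
// pointer into attribute 0 so that vertices are still generated.
if (global_position_location != 0) {
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(0);
}
glVertexAttribPointer(global_position_location, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(global_position_location);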
In more modern GL, these requirements are not there anymore, and drawing without attribute 0 will work as intended, and that is what you saw on those other systems. Maybe you should consider switching to modern GL, say >= 3.2 core profile, where the issue will not be present (but you need to update your code a bit, including the shaders).

OpenGL not drawing on Mavericks with GLFW and GLLoadGen

I'm attempting to setup a cross-platform codebase for OpenGL work, and the following code draws just fine on the Windows 7 partition of my hard drive. However, on Mavericks I only get a black screen and can't figure out why. I've tried all the things suggested in the guides and in related questions on here but nothing has worked so far! Hopefully I'm just missing something obvious, as I'm still quite new to OpenGL.
#include "stdafx.h"
#include "gl_core_4_3.hpp"
#include <GLFW/glfw3.h>
int main(int argc, char* argv[])
{
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, 1);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
if (!glfwInit())
{
fprintf(stderr, "ERROR");
glfwTerminate();
return 1;
}
GLFWwindow* window = glfwCreateWindow(640, 480, "First GLSL Triangle", nullptr, nullptr);
if (!window)
{
fprintf(stderr, "ERROR");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent(window);
gl::exts::LoadTest didLoad = gl::sys::LoadFunctions();
if (!didLoad)
{
fprintf(stderr, "ERROR");
glfwTerminate();
return 1;
}
printf("Number of functions that failed to load: %i\n", didLoad.GetNumMissing()); // This is returning 16 on Windows and 82 on Mavericks, however i have no idea how to fix that.
gl::Enable(gl::DEPTH_TEST);
gl::DepthFunc(gl::LESS);
float points[] =
{
0.0f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f,
};
GLuint vbo = 0;
gl::GenBuffers(1, &vbo);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::BufferData(gl::ARRAY_BUFFER, sizeof(points) * sizeof(points[0]), points, gl::STATIC_DRAW);
GLuint vao = 0;
gl::GenVertexArrays(1, &vao);
gl::BindVertexArray(vao);
gl::EnableVertexAttribArray(0);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::VertexAttribPointer(0, 3, gl::FLOAT, 0, 0, NULL);
const char* vertexShader =
"#version 400\n"
"in vec3 vp;"
"void main() {"
" gl_Position = vec4(vp, 1.0);"
"}";
const char* fragmentShader =
"#version 400\n"
"out vec4 frag_colour;"
"void main() {"
" frag_colour = vec4(1.0, 1, 1, 1.0);"
"}";
GLuint vs = gl::CreateShader(gl::VERTEX_SHADER);
gl::ShaderSource(vs, 1, &vertexShader, nullptr);
gl::CompileShader(vs);
GLuint fs = gl::CreateShader(gl::FRAGMENT_SHADER);
gl::ShaderSource(fs, 1, &fragmentShader, nullptr);
gl::CompileShader(fs);
GLuint shaderProgram = gl::CreateProgram();
gl::AttachShader(shaderProgram, fs);
gl::AttachShader(shaderProgram, vs);
gl::LinkProgram(shaderProgram);
while (!glfwWindowShouldClose(window))
{
gl::ClearColor(0, 0, 0, 0);
gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT);
gl::UseProgram(shaderProgram);
gl::BindVertexArray(vao);
gl::DrawArrays(gl::TRIANGLES, 0, 3);
glfwPollEvents();
glfwSwapBuffers(window);
}
glfwTerminate();
return 0;
}
Compiling through Xcode, using a 2013 Macbook Mini, Intel HD Graphics 5000. It's also probably worth noting that the GLLoadGen GetNumMissing() method is returning 82 missing functions on OSX, and I have no idea why that is or how to fix it. GLFW is including gl.h as opposed to gl3.h, but forcing it to include gl3.h by declaring the required macro outputs a warning about including both headers and still nothing draws. Any help or suggestions would be great.
You have to call glfwInit before you call any other GLFW function. Also register an error callback so that you get diagnostics about why a certain GLFW operation failed. You requested an OpenGL profile not supported by OS X Mavericks (4.3 core; Mavericks offers at most 4.1). But calling glfwInit after setting the window hints resets that selection, which is why you get a window and context, just not with the desired profile. Pulling glfwInit to the front solves that problem, but then your window and context creation fails due to the lack of OS support.
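A minimal sketch of the corrected startup order (assuming GLFW 3, with the version dropped to 4.1, the highest core profile Mavericks exposes; OnError is a hypothetical (int, const char*) callback):
glfwSetErrorCallback(OnError); // may be set before glfwInit
if (!glfwInit())
    return 1;
// Hints must come after glfwInit and before window creation.
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, 1);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);
GLFWwindow* window = glfwCreateWindow(640, 480, "First GLSL Triangle", nullptr, nullptr);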
After every OpenGL call, check that there is no error (use glGetError or gl::GetError). With your shaders, you must check that they have compiled properly; there may well be errors, so check that as well (glGetShaderiv and glGetShaderInfoLog). Do the same for the link stage.
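For instance, a link-status check in the question's GLLoadGen style might look like this (a sketch, assuming GLLoadGen's func_cpp naming and the question's shaderProgram handle):
GLint linked = 0;
gl::GetProgramiv(shaderProgram, gl::LINK_STATUS, &linked);
if (!linked) {
    GLchar log[1024];
    gl::GetProgramInfoLog(shaderProgram, sizeof(log), nullptr, log);
    fprintf(stderr, "Program link error: %s\n", log);
}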