OpenGL Programming Guide Eighth Edition, sample programs and 'NumVAOs' - C++

For anyone who has seen my previous questions: after working through the Red Book for Version 2.1, I am now moving on to Version 4.3. (Hooray, you say, since many of you have been telling me to do this for ages.)
So, I am deep into Chapter 3, but still haven't got Chapter 1's example program working.
I have two problems. (Actually three.) Firstly, it doesn't compile. Okay, so that's a problem, but it's kind of irrelevant considering the next two. Secondly, I don't exactly understand how it works or what it is trying to do, but we will get onto that.
Thirdly, it seems to me that the author of this code is a complete magician. I would suggest all sorts of tinkery-hackery are occurring here. This is most likely because of Problem Number 2, the fact that I don't understand what it is trying to do. The guys who wrote this book are, of course, not idiots, but bear with me, I will give an example.
Here is a section of code taken from the top of the main.cpp file. I will include the rest of the file later on, but for now:
enum VAO_IDs {
Triangles,
NumVAOs
};
If I understand correctly, this gives VAO_IDs::Triangles the value of 0 and VAO_IDs::NumVAOs the value of 1, since enums are zero-based. (I hope I am correct here, or it will be embarrassing for me.)
A short while later, you can see this line:
GLuint VAOs[NumVAOs];
This declares an array of GLuints containing 1 GLuint, due to the fact that NumVAOs is equal to 1. Now, firstly, shouldn't it be VAO_IDs::NumVAOs?
And secondly, why on earth has an enum been used in this way? I would never use an enum like that, for obvious reasons: you cannot have more than one entry with the same value, the values are not explicitly specified, etc.
Am I barking up the right tree here? It just doesn't make sense to do this... NumVAOs should have been a global, like this, surely? GLuint NumVAOs = 1; This is just abuse of the enum!
In fact, below that, the statement const GLuint NumVertices = 6; appears. This makes sense, doesn't it, because we could change the value 6 if we wanted to, but we cannot change NumVAOs to 0, for example, because Triangles is already set to 0. (Why is it in an enum? Seriously?)
Anyway, forget the enums... for now... Okay, so I made a big deal out of that, and that's the end of the problems... Any further comments I have are in the code below. You can ignore most of the GLFW stuff; it's essentially the same as GLUT.
// ----------------------------------------------------------------------------
//
// Triangles - First OpenGL 4.3 Program
//
// ----------------------------------------------------------------------------
#include <cstdlib>
#include <cstdint>
#include <cmath>
#include <stdio.h>
#include <iostream>
//#include <GL/gl.h>
//#include <GL/glu.h>
#include <GL/glew.h>
#include <GLFW/glfw3.h>
/// OpenGL specific
#include "vgl.h"
#include "LoadShaders.h" // These are essentially empty files with some background work going on, nothing declared or defined which is relevant here
enum VAO_IDs {
Triangles,
NumVAOs
};
// So Triangles = 0, NumVAOs = 1
// WHY DO THIS?!
enum Buffer_IDs {
ArrayBuffer,
NumBuffers
};
enum Attrib_IDs {
vPosition = 0
}
// Please, please, please someone explain the enum thing to me: why are they using enums instead of just global variables?
// (Yeah, an enum is a variable, okay, but you know what I mean.)
GLuint VAOs[NumVAOs]; // Compile error: expected initializer before 'VAOs'
GLuint Buffers[NumBuffers]; // NumBuffers is hidden in an enum again, like NumVAOs
const GLuint NumVertices = 6; // Why do something different here?
// ----------------------------------------------------------------------------
//
// Init
//
// ----------------------------------------------------------------------------
void init()
{
glGenVertexArrays(NumVAOs, VAOs); // Error: VAOs was not declared in this scope
glBindVertexArray(VAOs[Triangles]);
GLfloat vertices[NumVertices][2] = {
{ -0.90, -0.90 },
{ +0.85, -0.90 },
{ -0.90, +0.85 },
{ +0.90, -0.85 },
{ +0.90, +0.90 },
{ -0.85, +0.90 }
};
glGenBuffers(NumBuffers, Buffers);
glBindBuffer(GL_ARRAY_BUFFER, Buffers[ArrayBuffer]);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
ShaderInfo shaders[] = {
{ GL_VERTEX_SHADER, "triangles.vert" },
{ GL_FRAGMENT_SHADER, "triangles.frag" },
{ GL_NONE, nullptr }
};
GLuint program = LoadShaders(shaders);
glUseProgram(program);
glVertexAttribPointer(vPosition, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
glEnableVertexAttribArray(vPosition);
}
// ----------------------------------------------------------------------------
//
// Display
//
// ----------------------------------------------------------------------------
void display()
{
glClear(GL_COLOR_BUFFER_BIT);
glBindVertexArray(VAOs[Triangles]);
glDrawArrays(GL_TRIANGLES, 0, NumVertices); // Error VAOs not declared
glFlush();
}
// ----------------------------------------------------------------------------
//
// Main
//
// ----------------------------------------------------------------------------
void error_handle(int error, const char* description)
{
fputs(description, stderr);
}
void key_handle(GLFWwindow* window, int key, int scancode, int action, int mods)
{
if(key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
glfwSetWindowShouldClose(window, GL_TRUE);
}
void handle_exit()
{
}
int main(int argc, char **argv)
{
// Setup exit function
atexit(handle_exit);
// GLFW Window Pointer
GLFWwindow* window;
// Setup error callback
glfwSetErrorCallback(error_handle);
// Init
if(!glfwInit())
{
exit(EXIT_FAILURE);
}
// Setup OpenGL
glClearColor(0.0, 0.0, 0.0, 0.0);
glEnable(GL_DEPTH_TEST);
// Set GLFW window hints
glfwWindowHint(GLFW_DEPTH_BITS, 32);
glfwWindowHint(GLFW_RED_BITS, 8);
glfwWindowHint(GLFW_GREEN_BITS, 8);
glfwWindowHint(GLFW_BLUE_BITS, 8);
glfwWindowHint(GLFW_ALPHA_BITS, 8);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
//glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, 1);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
// Init GLEW
if(glewInit())
{
printf("GLEW init failure!\n", stderr);
exit(EXIT_FAILURE);
}
// Init OpenGL
init();
// Create Window
window = glfwCreateWindow(800, 600, "Window Title", nullptr, nullptr);
if(!window)
{
glfwTerminate();
return EXIT_FAILURE;
}
// Make current
glfwMakeContextCurrent(window);
// Set key callback
glfwSetKeyCallback(window, key_handle);
// Check OpenGL Version
char* version;
version = (char*)glGetString(GL_VERSION);
printf("OpenGL Application Running, Version: %s\n", version);
// Enter main loop
while(!glfwWindowShouldClose(window))
{
// Event polling
glfwPollEvents();
// OpenGL Rendering
// Setup OpenGL viewport and clear screen
float ratio;
int width, height;
glfwGetFramebufferSize(window, &width, &height);
ratio = width / height;
glViewport(0, 0, width, height);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Setup projection
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0, ratio, 0.1, 10.0);
// Render
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// Swap Buffers
glfwSwapBuffers(window);
}
// Free glfw memory allocated for window
glfwDestroyWindow(window);
// Exit
glfwTerminate();
exit(EXIT_SUCCESS);
}
A very verbose question, I realize, but I thought it was important to explain why I think it's crazy code rather than just saying "I don't get it", as would be easy to do. Could someone please explain why these very clever people decided to do it this way, and why there are errors? (I can find nothing about this online.)

The author is using the automatic numbering property of enums to automatically update the definition of the NumVAOs and NumBuffers values. For example, when new VAO IDs are added to the enum, the NumVAOs value will still be correct as long as it is listed last in the enum.
enum VAO_IDs {
Triangles,
Polygons,
Circles,
NumVAOs
};
Every C++ compiler supports this trick, so that is not where your compile errors come from. They come from the missing semicolon after the Attrib_IDs enum: enum Attrib_IDs { vPosition = 0 } needs to end with a semicolon, and without it the compiler reports "expected initializer before 'VAOs'" on the next declaration (and then "VAOs was not declared in this scope" wherever VAOs is used).
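For reference, a minimal sketch of the corrected declarations from the question; only the semicolon after the Attrib_IDs enum is new, everything else is unchanged:
enum Attrib_IDs {
    vPosition = 0
};                          // <- this semicolon was missing in the posted code
GLuint VAOs[NumVAOs];       // NumVAOs == 1, so an array of one vertex array handle
GLuint Buffers[NumBuffers]; // likewise sized by its enum, no manual count to maintain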

Related

Program crash when calling OpenGL functions

I'm trying to set up a game engine project. My Visual Studio project is set up so that I have an 'engine' project separate from my 'game' project. The engine project is being compiled to a DLL for the game project to use. I've already downloaded and set up GLFW and GLEW to start using OpenGL. My problem is that whenever I hit my first OpenGL function the program crashes. I know this has something to do with glewInit, even though GLEW IS initializing successfully (no console errors). In my engine project, I have a window class where, upon window construction, GLEW should be set up:
Window.h
#pragma once
#include "GL\glew.h"
#include "GLFW\glfw3.h"
#if (_DEBUG)
#define LOG(x) printf(x)
#else
#define LOG(x)
#endif
namespace BlazeGraphics
{
class __declspec(dllexport) Window
{
public:
Window(short width, short height, const char* title);
~Window();
void Update();
void Clear() const;
bool Closed() const;
private:
int m_height;
int m_width;
const char* m_title;
GLFWwindow* m_window;
private:
Window(const Window& copy) {}
void operator=(const Window& copy) {}
};
}
Window.cpp (where glewInit() is called)
#include "Window.h"
#include <cstdio>
namespace BlazeGraphics
{
//Needed to define outside of the window class (not sure exactly why yet)
void WindowResize(GLFWwindow* window, int width, int height);
Window::Window(short width, short height, const char* title) :
m_width(width),
m_height(height),
m_title(title)
{
//InitializeWindow
{
if (!glfwInit())
{
LOG("Failed to initialize glfw!");
return;
};
m_window = glfwCreateWindow(m_width, m_height, m_title, NULL, NULL);
if (!m_window)
{
LOG("Failed to initialize glfw window!");
glfwTerminate();
return;
};
glfwMakeContextCurrent(m_window);
glfwSetWindowSizeCallback(m_window, WindowResize);
}
//IntializeGl
{
//This needs to be after two functions above (makecontextcurrent and setwindowresizecallback) or else glew will not initialize
if (glewInit() != GLEW_OK)
{
LOG("Failed to initialize glew!");
}
}
}
Window::~Window()
{
glfwTerminate();
}
void Window::Update()
{
glfwPollEvents();
glfwSwapBuffers(m_window);
}
void Window::Clear() const
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}
//Returns a bool because glfwwindowShouldClose returns a nonzero number or zero
bool Window::Closed() const
{
//Made it equal to 1 to take away warning involving converting an int to bool
return glfwWindowShouldClose(m_window) == 1;
}
//Not part of window class so defined above
void WindowResize(GLFWwindow* window, int width, int height)
{
glViewport(0, 0, width, height);
}
}
Here is my main.cpp file which is found within my game project where I currently have my openGL functionality in global functions (just for now):
main.cpp
#include <iostream>
#include <array>
#include <fstream>
#include "GL\glew.h"
#include "GLFW\glfw3.h"
#include "../Engine/Source/Graphics/Window.h"
void initializeGLBuffers()
{
GLfloat triangle[] =
{
+0.0f, +0.1f, -0.0f,
0.0f, 1.0f, 0.0f,
-0.1f, -0.1f, 0.0f, //1
0.0f, 1.0f, 0.0f,
+0.1f, -0.1f, 0.0f, //2
0.0f, 1.0f, 0.0f,
};
GLuint bufferID;
glGenBuffers(1, &bufferID);
glBindBuffer(GL_ARRAY_BUFFER, bufferID);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangle), triangle, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, (sizeof(GLfloat)) * 6, nullptr);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, (sizeof(GLfloat)) * 6, (char*)((sizeof(GLfloat)) * 3));
GLushort indices[] =
{
0,1,2
};
GLuint indexBufferID;
glGenBuffers(1, &indexBufferID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
};
void installShaders()
{
//Create Shader
GLuint vertexShaderID = glCreateShader(GL_VERTEX_SHADER);
GLuint FragmentShaderID = glCreateShader(GL_FRAGMENT_SHADER);
//Add source or text file to shader object
std::string temp = readShaderCode("VertexShaderCode.glsl");
const GLchar* adapter[1];
adapter[0] = temp.c_str();
glShaderSource(vertexShaderID, 1, adapter, 0);
temp = readShaderCode("FragmentShaderCode.glsl").c_str();
adapter[0] = temp.c_str();
glShaderSource(FragmentShaderID, 1, adapter, 0);
//Compile Shader
glCompileShader(vertexShaderID);
glCompileShader(FragmentShaderID);
if (!checkShaderStatus(vertexShaderID) || !checkShaderStatus(FragmentShaderID))
return;
//Create Program
GLuint programID = glCreateProgram();
glAttachShader(programID, vertexShaderID);
glAttachShader(programID, FragmentShaderID);
//Link Program
glLinkProgram(programID);
if (!checkProgramStatus(programID))
{
std::cout << "Failed to link program";
return;
}
//Use program
glUseProgram(programID);
}
int main()
{
BlazeGraphics::Window window(1280, 720, "MyGame");
initializeGLBuffers();
installShaders();
while (!window.Closed())
{
window.Clear();
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, 0);
window.Update();
};
return 0;
}
Now, if I were to move the glewInit() code here into my main.cpp:
int main()
{
BlazeGraphics::Window window(1280, 720, "MyGame");
if (glewInit() != GLEW_OK)
{
LOG("Failed to initialize glew!");
}
initializeGLBuffers();
installShaders();
while (!window.Closed())
{
window.Clear();
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, 0);
window.Update();
};
return 0;
}
then my program runs fine. Why does trying to initialize GLEW within engine.dll cause a program crash? Thanks for any help.
GLEW works by defining a function pointer as a global variable for each OpenGL function. Let's look at glBindBuffer as an example:
#define GLEW_FUN_EXPORT GLEWAPI
typedef void (GLAPIENTRY * PFNGLBINDBUFFERPROC) (GLenum target, GLuint buffer);
GLEW_FUN_EXPORT PFNGLBINDBUFFERPROC __glewBindBuffer;
So we just have a __glewBindBuffer function pointer, which will be set to the correct address from your OpenGL implementation by glewInit.
To actually be able to write glBindBuffer, GLEW simply defines pre-processor macros mapping the GL functions to those function pointer variables:
#define glBindBuffer GLEW_GET_FUN(__glewBindBuffer)
Why does trying to initialize glew within engine.dll cause a program crash?
Because your engine.dll and your main application each have a separate set of all of these global variables. You would have to export all the __glew* variables from your engine DLL to be able to get access to the results of your glewInit call in engine.dll.
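A minimal sketch of the simplest workaround, assuming you keep GLEW statically linked into both modules: call glewInit once per module after the context is current, so each module's own set of pointers is filled in. This is essentially what your second main.cpp already does:
int main()
{
    // The Window constructor (inside engine.dll) creates the context and calls
    // glewInit, which fills in the DLL's copies of the __glew* pointers.
    BlazeGraphics::Window window(1280, 720, "MyGame");

    // The executable has its own, still-NULL copies, so initialize them too
    // before making any GL calls from this module.
    if (glewInit() != GLEW_OK)
        return -1;

    initializeGLBuffers();
    installShaders();
    while (!window.Closed())
    {
        window.Clear();
        glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, 0);
        window.Update();
    }
    return 0;
}
Alternatively, linking both modules against GLEW built as a shared library (or exporting the __glew* symbols from the engine DLL, as described above) gives them a single shared set of pointers, so one glewInit call suffices.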

OpenGL Red Book with Mac OS X

I would like to work through the OpenGL Red Book, The OpenGL Programming Guide, 8th edition, using Xcode on Mac OS X.
I am unable to run the first code example, triangles.cpp. I have tried including the GLUT and GL frameworks that come with Xcode and I have searched around enough to see that I am not likely to figure this out on my own.
Assuming that I have a fresh installation of Mac OS X, and I have freshly installed Xcode with Xcode command-line tools, what are the step-by-step instructions to be able to run triangles.cpp in that environment?
Unlike this question, my preference would be not to use Cocoa, Objective-C or Swift. My preference would be to stay in C++/C only. An answer is only correct if I can follow it step-by-step and end up with a running triangles.cpp program.
My preference is Mac OS X 10.9, however a correct answer can assume 10.9, 10.10 or 10.11.
Thank you.
///////////////////////////////////////////////////////////////////////
//
// triangles.cpp
//
///////////////////////////////////////////////////////////////////////
#include <iostream>
using namespace std;
#include "vgl.h"
#include "LoadShader.h"
enum VAO_IDs { Triangles, NumVAOs };
enum Buffer_IDs { ArrayBuffer, NumBuffers };
enum Attrib_IDs { vPosition = 0 };
GLuint VAOs[NumVAOs];
GLuint Buffers[NumBuffers];
const GLuint NumVertices = 6;
//---------------------------------------------------------------------
//
// init
//
void
init(void)
{
glGenVertexArrays(NumVAOs, VAOs);
glBindVertexArray(VAOs[Triangles]);
GLfloat vertices[NumVertices][2] = {
{ -0.90, -0.90 }, // Triangle 1
{ 0.85, -0.90 },
{ -0.90, 0.85 },
{ 0.90, -0.85 }, // Triangle 2
{ 0.90, 0.90 },
{ -0.85, 0.90 }
};
glGenBuffers(NumBuffers, Buffers);
glBindBuffer(GL_ARRAY_BUFFER, Buffers[ArrayBuffer]);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices),
vertices, GL_STATIC_DRAW);
ShaderInfo shaders[] = {
{ GL_VERTEX_SHADER, "triangles.vert" },
{ GL_FRAGMENT_SHADER, "triangles.frag" },
{ GL_NONE, NULL }
};
GLuint program = LoadShaders(*shaders);
glUseProgram(program);
glVertexAttribPointer(vPosition, 2, GL_FLOAT,
GL_FALSE, 0, BUFFER_OFFSET(0));
glEnableVertexAttribArray(vPosition);
}
//---------------------------------------------------------------------
//
// display
//
void
display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
glBindVertexArray(VAOs[Triangles]);
glDrawArrays(GL_TRIANGLES, 0, NumVertices);
glFlush();
}
//---------------------------------------------------------------------
//
// main
//
int
main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA);
glutInitWindowSize(512, 512);
glutInitContextVersion(4, 3);
glutInitContextProfile(GLUT_CORE_PROFILE);
glutCreateWindow(argv[0]);
glewExperimental = GL_TRUE;
if (glewInit()) {
cerr << "Unable to initialize GLEW ... exiting" << endl;
exit(EXIT_FAILURE);
}
init();
glutDisplayFunc(display);
glutMainLoop();
}
Edit 1: In response to the first comment, here is the naive effort.
Open Xcode 5.1.1 on Mac OS X 10.9.5
Create a new C++ Command-line project.
Paste over the contents of main.cpp with the contents of triangles.cpp.
Click on the project -> Build Phases -> Link Binary with Libraries
Add OpenGL.framework and GLUT.framework
Result: "/Users/xxx/Desktop/Triangles/Triangles/main.cpp:10:10: 'vgl.h' file not found"
Edit 2: Added the vgl translation unit and the LoadShaders translation unit, and also added libFreeGlut.a and libGlew32.a to my project's compilation/linking. Moved all of the OpenGL Book's Include contents to my project's source directory. Had to change several include statements to use quoted includes instead of angled includes. It feels like this is closer to working, but it is unable to find LoadShader.h. Note that the translation unit in the OpenGL download is called LoadShaders (plural). Changing triangles.cpp to reference LoadShaders.h fixed the include problem, but the contents of that translation unit don't seem to match the signatures of what is being called from triangles.cpp.
There are some issues with the source and with the files in oglpg-8th-edition.zip:
triangles.cpp uses non-standard GLUT functions that aren't included in glut, and instead are only part of the freeglut implementation (glutInitContextVersion and glutInitContextProfile). freeglut doesn't really support OS X and building it instead relies on additional X11 support. Instead of telling you how to do this I'm just going to modify the source to build with OS X's GLUT framework.
The code depends on glew, and the book's source download apparently doesn't include a binary you can use, so you'll need to build it for yourself.
Build GLEW with the following commands:
git clone git://git.code.sf.net/p/glew/code glew
cd glew
make extensions
make
Now:
Create a C++ command line Xcode project
Set the executable to link with the OpenGL and GLUT frameworks and the glew dylib you just built.
Modify the project "Header Search Paths" to include the location of the glew headers for the library you built, followed by the path to oglpg-8th-edition/include
Add oglpg-8th-edition/lib/LoadShaders.cpp to your xcode project
Paste the triangles.cpp source into the main.cpp of your Xcode project
Modify the source: replace #include "vgl.h" with:
#include <GL/glew.h>
#include <OpenGL/gl3.h>
#include <GLUT/glut.h>
#define BUFFER_OFFSET(x) ((const void*) (x))
Also make sure that the typos in the version of triangles.cpp that you include in your question are fixed: You include "LoadShader.h" when it should be "LoadShaders.h", and LoadShaders(*shaders); should be LoadShaders(shaders). (The code printed in my copy of the book doesn't contain these errors.) The corrected lines are shown below.
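For reference, the corrected lines would read:
#include "LoadShaders.h"                 // plural, matching the file in oglpg-8th-edition/lib
GLuint program = LoadShaders(shaders);   // pass the array itself, not *shaders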
Delete the calls to glutInitContextVersion and glutInitContextProfile.
Change the parameter to glutInitDisplayMode to GLUT_RGBA | GLUT_3_2_CORE_PROFILE
At this point the code builds, links, and runs, however running the program displays a black window for me instead of the expected triangles.
To fix the black window issue mentioned in Matthew's and bames53's comments:
Follow bames53's answer
Define the shaders as strings
const char *pTriangleVert =
"#version 410 core\n\
layout(location = 0) in vec4 vPosition;\n\
void\n\
main()\n\
{\n\
gl_Position= vPosition;\n\
}";
const char *pTriangleFrag =
"#version 410 core\n\
out vec4 fColor;\n\
void\n\
main()\n\
{\n\
fColor = vec4(0.0, 0.0, 1.0, 1.0);\n\
}";
OpenGL 4.1 is what my iMac supports, so I changed the shader version to 410
ShaderInfo shaders[] = {
{ GL_VERTEX_SHADER, pTriangleVert},
{ GL_FRAGMENT_SHADER, pTriangleFrag },
{ GL_NONE, NULL }
};
Modify the ShaderInfo struct slightly
change
typedef struct {
GLenum type;
const char* filename;
GLuint shader;
} ShaderInfo;
into
typedef struct {
GLenum type;
const char* source;
GLuint shader;
} ShaderInfo;
Modify the LoadShaders function slightly
Comment out the code that reads the shader from a file; change
/*
const GLchar* source = ReadShader( entry->filename );
if ( source == NULL ) {
for ( entry = shaders; entry->type != GL_NONE; ++entry ) {
glDeleteShader( entry->shader );
entry->shader = 0;
}
return 0;
}
glShaderSource( shader, 1, &source, NULL );
delete [] source;*/
into
glShaderSource(shader, 1, &entry->source, NULL);
You had better turn on DEBUG in case there are shader compile errors.
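If you would rather not depend on the DEBUG path in LoadShaders.cpp, a standalone check right after glCompileShader looks like this (a sketch using only standard GL calls, assuming <vector> and <iostream> are included; shader is whatever handle you just compiled):
GLint compiled = GL_FALSE;
glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
if (compiled == GL_FALSE) {
    GLint len = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &len);
    std::vector<GLchar> log(len + 1);
    glGetShaderInfoLog(shader, len, NULL, log.data());
    std::cerr << "Shader compile failed: " << log.data() << std::endl;
}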
You can use the example from this link. It's almost the same; it uses GLFW instead of GLUT.
http://www.tomdalling.com/blog/modern-opengl/01-getting-started-in-xcode-and-visual-cpp/
/*
main
Copyright 2012 Thomas Dalling - http://tomdalling.com/
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
//#include "platform.hpp"
// third-party libraries
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>
// standard C++ libraries
#include <cassert>
#include <iostream>
#include <stdexcept>
#include <cmath>
// tdogl classes
#include "Program.h"
// constants
const glm::vec2 SCREEN_SIZE(800, 600);
// globals
GLFWwindow* gWindow = NULL;
tdogl::Program* gProgram = NULL;
GLuint gVAO = 0;
GLuint gVBO = 0;
// loads the vertex shader and fragment shader, and links them to make the global gProgram
static void LoadShaders() {
std::vector<tdogl::Shader> shaders;
shaders.push_back(tdogl::Shader::shaderFromFile("vertex-shader.txt", GL_VERTEX_SHADER));
shaders.push_back(tdogl::Shader::shaderFromFile("fragment-shader.txt", GL_FRAGMENT_SHADER));
gProgram = new tdogl::Program(shaders);
}
// loads a triangle into the VAO global
static void LoadTriangle() {
// make and bind the VAO
glGenVertexArrays(1, &gVAO);
glBindVertexArray(gVAO);
// make and bind the VBO
glGenBuffers(1, &gVBO);
glBindBuffer(GL_ARRAY_BUFFER, gVBO);
// Put the three triangle verticies into the VBO
GLfloat vertexData[] = {
// X Y Z
0.0f, 0.8f, 0.0f,
-0.8f,-0.8f, 0.0f,
0.8f,-0.8f, 0.0f,
};
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), vertexData, GL_STATIC_DRAW);
// connect the xyz to the "vert" attribute of the vertex shader
glEnableVertexAttribArray(gProgram->attrib("vert"));
glVertexAttribPointer(gProgram->attrib("vert"), 3, GL_FLOAT, GL_FALSE, 0, NULL);
// unbind the VBO and VAO
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
// draws a single frame
static void Render() {
// clear everything
glClearColor(0, 0, 0, 1); // black
glClear(GL_COLOR_BUFFER_BIT);
// bind the program (the shaders)
glUseProgram(gProgram->object());
// bind the VAO (the triangle)
glBindVertexArray(gVAO);
// draw the VAO
glDrawArrays(GL_TRIANGLES, 0, 3);
// unbind the VAO
glBindVertexArray(0);
// unbind the program
glUseProgram(0);
// swap the display buffers (displays what was just drawn)
glfwSwapBuffers(gWindow);
}
void OnError(int errorCode, const char* msg) {
throw std::runtime_error(msg);
}
// the program starts here
void AppMain() {
// initialise GLFW
glfwSetErrorCallback(OnError);
if(!glfwInit())
throw std::runtime_error("glfwInit failed");
// open a window with GLFW
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
gWindow = glfwCreateWindow((int)SCREEN_SIZE.x, (int)SCREEN_SIZE.y, "OpenGL Tutorial", NULL, NULL);
if(!gWindow)
throw std::runtime_error("glfwCreateWindow failed. Can your hardware handle OpenGL 3.2?");
// GLFW settings
glfwMakeContextCurrent(gWindow);
// initialise GLEW
glewExperimental = GL_TRUE; //stops glew crashing on OSX :-/
if(glewInit() != GLEW_OK)
throw std::runtime_error("glewInit failed");
// print out some info about the graphics drivers
std::cout << "OpenGL version: " << glGetString(GL_VERSION) << std::endl;
std::cout << "GLSL version: " << glGetString(GL_SHADING_LANGUAGE_VERSION) << std::endl;
std::cout << "Vendor: " << glGetString(GL_VENDOR) << std::endl;
std::cout << "Renderer: " << glGetString(GL_RENDERER) << std::endl;
// make sure OpenGL version 3.2 API is available
if(!GLEW_VERSION_3_2)
throw std::runtime_error("OpenGL 3.2 API is not available.");
// load vertex and fragment shaders into opengl
LoadShaders();
// create buffer and fill it with the points of the triangle
LoadTriangle();
// run while the window is open
while(!glfwWindowShouldClose(gWindow)){
// process pending events
glfwPollEvents();
// draw one frame
Render();
}
// clean up and exit
glfwTerminate();
}
int main(int argc, char *argv[]) {
try {
AppMain();
} catch (const std::exception& e){
std::cerr << "ERROR: " << e.what() << std::endl;
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
I have adapted the project for the Mac here:
https://github.com/badousuan/openGLredBook9th
The project builds successfully and most of the demos run as expected. However, the original code is based on OpenGL 4.5, while the Mac only supports version 4.1, so some newer API calls may fail. If some target does not work well, you should consider this version issue and make the appropriate adaptations.
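For example, the direct-state-access calls in the 4.5-based samples have no equivalent in the 4.1 context you get on a Mac and have to be rewritten in bind-to-edit style. A sketch of the kind of change involved (not taken from the repository; vertices stands for whatever vertex array the sample uses):
// OpenGL 4.5 direct-state-access style used by the newer samples:
//   glCreateBuffers(1, &buffer);
//   glNamedBufferStorage(buffer, sizeof(vertices), vertices, 0);
// Equivalent OpenGL 4.1 bind-to-edit style that works on the Mac:
GLuint buffer;
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);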
I use the code from this tutorial: http://antongerdelan.net/opengl/hellotriangle.html, and it works on my Mac.
Here is the code I run.
#include <GL/glew.h> // include GLEW and new version of GL on Windows
#include <GLFW/glfw3.h> // GLFW helper library
#include <stdio.h>
int main() {
// start GL context and O/S window using the GLFW helper library
if (!glfwInit()) {
fprintf(stderr, "ERROR: could not start GLFW3\n");
return 1;
}
// uncomment these lines if on Apple OS X
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
GLFWwindow* window = glfwCreateWindow(640, 480, "Hello Triangle", NULL, NULL);
if (!window) {
fprintf(stderr, "ERROR: could not open window with GLFW3\n");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent(window);
// start GLEW extension handler
glewExperimental = GL_TRUE;
glewInit();
// get version info
const GLubyte* renderer = glGetString(GL_RENDERER); // get renderer string
const GLubyte* version = glGetString(GL_VERSION); // version as a string
printf("Renderer: %s\n", renderer);
printf("OpenGL version supported %s\n", version);
// tell GL to only draw onto a pixel if the shape is closer to the viewer
glEnable(GL_DEPTH_TEST); // enable depth-testing
glDepthFunc(GL_LESS); // depth-testing interprets a smaller value as "closer"
/* OTHER STUFF GOES HERE NEXT */
float points[] = {
0.0f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f
};
GLuint vbo = 0; // vertex buffer object
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(float), points, GL_STATIC_DRAW);
GLuint vao = 0; // vertex array object
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
const char* vertex_shader =
"#version 400\n"
"in vec3 vp;"
"void main() {"
" gl_Position = vec4(vp, 1.0);"
"}";
const char* fragment_shader =
"#version 400\n"
"out vec4 frag_colour;"
"void main() {"
" frag_colour = vec4(0.5, 0.0, 0.5, 1.0);"
"}";
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vertex_shader, NULL);
glCompileShader(vs);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fragment_shader, NULL);
glCompileShader(fs);
GLuint shader_programme = glCreateProgram();
glAttachShader(shader_programme, fs);
glAttachShader(shader_programme, vs);
glLinkProgram(shader_programme);
while(!glfwWindowShouldClose(window)) {
// wipe the drawing surface clear
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(shader_programme);
glBindVertexArray(vao);
// draw points 0-3 from the currently bound VAO with current in-use shader
glDrawArrays(GL_TRIANGLES, 0, 3);
// update other events like input handling
glfwPollEvents();
// put the stuff we've been drawing onto the display
glfwSwapBuffers(window);
}
// close GL context and any other GLFW resources
glfwTerminate();
return 0;
}

Segmentation Fault glDrawArrays()

I am trying to generate a terrain from a file and display it in a window on my screen in OpenGL. I am getting a seg fault, and I've localised it to the glDrawArrays() call in my code.
I might be calling it wrong, or my heightmap may have too many vertices for the way I am calling it.
I will link my code below and put a comment next to the segfault line.
/**
* A typical program flow and methods for rendering simple polygons
* using freeglut and openGL + GLSL
*/
#include <stdio.h>
// GLEW loads OpenGL extensions. Required for all OpenGL programs.
#include <GL/glew.h>
#ifdef __APPLE__
#include <GLUT/glut.h>
#else
#include <GL/glut.h>
#endif
// Utility code to load and compile GLSL shader programs
#include "shader.hpp"
#include <iostream>
#include <fstream>
#include <vector>
#define WINDOW_WIDTH 400
#define WINDOW_HEIGHT 400
//#define VALS_PER_VERT 3
//#define VALS_PER_COLOUR 4
//#define NUM_VERTS 3 // Total number of vertices to load/render
#define VALS_PER_VERT_HEIGHT 5
#define VALS_PER_COLOUR_HEIGHT 4
#define HEIGHT_VERTS 5 //height map vertices per line
using namespace std;
// Handle to our VAO generated in setShaderData method
//heightmap
unsigned int vertexVaoHandleHeight;
// Handle to our shader program
unsigned int programID;
/**
* Sets the shader uniforms and vertex data
* This happens ONCE only, before any frames are rendered
* @param id, Shader program object to use
* @returns 0 for success, error otherwise
*/
int setShaderData(const unsigned int &id)
{
/*
* What we want to draw
* Each set of 3 vertices (9 floats) defines one triangle
* You can define more triangles to draw here
*/
float heightmapVerts[ HEIGHT_VERTS*VALS_PER_VERT_HEIGHT ] = {
//5
-0.9, -0.6, -0.4, -0.6, -0.9,
-0.2, 0.1, 0.3, 0.1, -0.3,
0, 0.4, 0.8, 0.4, 0,
-0.2, 0.1, 0.3, 0.1, -0.3,
0.5, -0.6, -0.4, -0.6, -0.9,
};
std::cout << "1" << endl;
// Colours for each vertex; red, green, blue and alpha
// This data is indexed the same order as the vertex data, but reads 4 values
// Alpha will not be used directly in this example program
//may cause problems because less numbers
float heightColours[ HEIGHT_VERTS*VALS_PER_COLOUR_HEIGHT ] = {
0.8f, 0.7f, 0.5f, 1.0f,
0.3f, 0.7f, 0.1f, 1.0f,
0.8f, 0.2f, 0.5f, 1.0f,
};
std::cout << "2" << endl;
// heightmap stuff ##################################################
// Generate storage on the GPU for our triangle and make it current.
// A VAO is a set of data buffers on the GPU
glGenVertexArrays(1, &vertexVaoHandleHeight);
glBindVertexArray(vertexVaoHandleHeight);
std::cout << "3" << endl;
// Generate new buffers in our VAO
// A single data buffer store for generic, per-vertex attributes
unsigned int bufferHeight[2];
glGenBuffers(2, bufferHeight);
// Allocate GPU memory for our vertices and copy them over
glBindBuffer(GL_ARRAY_BUFFER, bufferHeight[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*HEIGHT_VERTS*VALS_PER_VERT_HEIGHT, heightmapVerts, GL_STATIC_DRAW);
// Do the same for our vertex colours
glBindBuffer(GL_ARRAY_BUFFER, bufferHeight[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*HEIGHT_VERTS*VALS_PER_COLOUR_HEIGHT, heightColours, GL_STATIC_DRAW);
std::cout << "4" << endl;
// Now we tell OpenGL how to interpret the data we just gave it
// Tell OpenGL what shader variable it corresponds to
// Tell OpenGL how it's formatted (floating point, 3 values per vertex)
int vertLocHeight = glGetAttribLocation(id, "a_vertex");
glBindBuffer(GL_ARRAY_BUFFER, bufferHeight[0]);
glEnableVertexAttribArray(vertLocHeight);
glVertexAttribPointer(vertLocHeight, VALS_PER_VERT_HEIGHT, GL_FLOAT, GL_FALSE, 0, 0);
std::cout << "5" << endl;
// Do the same for the vertex colours
int colourLocHeight = glGetAttribLocation(id, "a_colour");
glBindBuffer(GL_ARRAY_BUFFER, bufferHeight[1]);
glEnableVertexAttribArray(colourLocHeight);
glVertexAttribPointer(colourLocHeight, VALS_PER_COLOUR_HEIGHT, GL_FLOAT, GL_FALSE, 0, 0);
// heightmap stuff ##################################################
std::cout << "6" << endl;
// An argument of zero un-binds all VAO's and stops us
// from accidentally changing the VAO state
glBindVertexArray(0);
// The same is true for buffers, so we un-bind it too
glBindBuffer(GL_ARRAY_BUFFER, 0);
std::cout << "7" << endl;
return 0; // return success
}
/**
* Renders a frame of the state and shaders we have set up to the window
* Executed each time a frame is to be drawn.
*/
void render()
{
// Clear the previous pixels we have drawn to the colour buffer (display buffer)
// Called each frame so we don't draw over the top of everything previous
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(programID);
std::cout << "8" << endl;
// HEIGHT MAP STUFF ###################################
// Make the VAO with our vertex data buffer current
glBindVertexArray(vertexVaoHandleHeight);
// Send command to GPU to draw the data in the current VAO as triangles
std::cout << "8.5" << endl;
//CRASHES HERE
glDrawArrays(GL_TRIANGLES, 0, /*HEIGHT_VERTS = 5*/ 3);
std::cout << "8.75" << endl;
glBindVertexArray(0); // Un-bind the VAO
std::cout << "9" << endl;
// HEIGHT MAP STUFF ###################################
glutSwapBuffers(); // Swap the back buffer with the front buffer, showing what has been rendered
glFlush(); // Guarantees previous commands have been completed before continuing
}
/**
* Program entry. Sets up OpenGL state, GLSL Shaders and GLUT window and function call backs
* Takes no arguments
*/
int main(int argc, char **argv) {
//READ IN FILE//
std::fstream myfile("heights.csv", std::ios_base::in);
if(!myfile.good()){cout << "file not found" << endl;}
std::vector<float> numbers;
float a;
while (myfile >> a){/*printf("%f ", a);*/
numbers.push_back(a);
}
//for (int i=0; i<numbers.size();i++){cout << numbers[i] << endl;}
getchar();
//READ IN FILE//
// Set up GLUT window
glutInit(&argc, argv); // Starts GLUT systems, passing in command line args
glutInitWindowPosition(100, 0); // Positions the window on the screen relative to top left
glutInitWindowSize(WINDOW_WIDTH, WINDOW_HEIGHT); // Size in pixels
// Display mode takes bit flags defining properties you want the window to have;
// GLUT_RGBA : Set the pixel format to have Red Green Blue and Alpha colour channels
// GLUT_DOUBLE : Each frame is drawn to a hidden back buffer hiding the image construction
// GLUT_DEPTH : A depth buffer is kept so that polygons can be drawn in-front/behind others (not used in this application)
#ifdef __APPLE__
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH | GLUT_3_2_CORE_PROFILE);
#else
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH );
#endif
glutCreateWindow("Hello World!"); // Makes the actual window and displays
// Initialize GLEW
glewExperimental = true; // Needed for core profile
if (glewInit() != GLEW_OK) {
fprintf(stderr, "Failed to initialize GLEW\n");
return -1;
}
// Sets the (background) colour for each time the frame-buffer (colour buffer) is cleared
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
// Set up the shaders we are to use. 0 indicates error.
programID = LoadShaders("minimal.vert", "minimal.frag");
if (programID == 0)
return 1;
// Set this shader program in use
// This is an OpenGL state modification and persists unless changed
glUseProgram(programID);
// Set the vertex data for the program
if (setShaderData(programID) != 0)
return 1;
// Render call to a function we defined,
// that is called each time GLUT thinks we need to update
// the window contents, this method has our drawing logic
glutDisplayFunc(render);
// Start an infinite loop where GLUT calls methods (like render)
// set with glut*Func when needed.
// Runs until something kills the window
glutMainLoop();
return 0;
}
The size argument of glVertexAttribPointer() must be 1, 2, 3, or 4. You pass 5 here:
#define VALS_PER_VERT_HEIGHT 5
...
glVertexAttribPointer(vertLocHeight, VALS_PER_VERT_HEIGHT, GL_FLOAT, GL_FALSE, 0, 0);
You should always call glGetError() if you have problems with your rendering. The call above will immediately give you a GL_INVALID_VALUE error.
Your code also looks generally inconsistent. In some places, you seem to assume that you have 3 vertices, in others 4, in others 5. Then, as shown above, you have vertices with 5 coordinates, which does not really make any sense. You may want to have a careful look at your own code, and make sure that everything is consistent with what you are trying to do.
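A minimal sketch of a consistent vertex setup, assuming three position floats per vertex; the names are kept from the question and the glGetError check is only illustrative:
#define VALS_PER_VERT_HEIGHT 3   // x, y, z - glVertexAttribPointer only accepts 1 to 4

glBindBuffer(GL_ARRAY_BUFFER, bufferHeight[0]);
glEnableVertexAttribArray(vertLocHeight);
glVertexAttribPointer(vertLocHeight, VALS_PER_VERT_HEIGHT, GL_FLOAT, GL_FALSE, 0, 0);

GLenum err = glGetError();
if (err != GL_NO_ERROR)
    std::cerr << "GL error after vertex setup: 0x" << std::hex << err << std::endl;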

XCode error at run-time: "address doesn't contain a section that points to a section in a object file"

I'm trying to learn OpenGL on Mac, by following the tutorials here. This involves setting up GLEW, GLFW, and GLM, which I did using homebrew. Being completely new to OpenGL, XCode, and C++, it took me a bit of googling, but I managed to figure out all the various header paths, library paths, and linker arguments to get set up.
Now I'm receiving the error "address doesn't contain a section that points to a section in a object file" when running this code. Being unfamiliar with the (frankly bizarre) XCode UI, I'm having trouble tracing the source of the problem. Google just points me at Objective-C related articles about ARC. Well, this isn't Obj-C, and I'm not using ARC, so no luck there.
Any ideas what might be causing it?
// Include standard headers
#include <stdio.h>
#include <stdlib.h>
// Include GLEW. Always include it before gl.h and glfw.h, since it's a bit magic.
#include <GL/glew.h>
// Include GLFW
#include <GL/glfw.h>
// Include GLM
#include <glm/glm.hpp>
using namespace glm;
//http://www.opengl-tutorial.org/beginners-tutorials/tutorial-1-opening-a-window/
int main(int argv, char ** argc){
// Initialise GLFW
if( !glfwInit() )
{
fprintf( stderr, "Failed to initialize GLFW\n" );
return -1;
}
glfwOpenWindowHint(GLFW_FSAA_SAMPLES, 4); // 4x antialiasing
//http://www.glfw.org/faq.html#42__how_do_i_create_an_opengl_30_context
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3);
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 2);
glfwOpenWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwOpenWindowHint(GLFW_OPENGL_PROFILE, 0); //We don't want the old OpenGL // Open a window and create its OpenGL context
if( !glfwOpenWindow( 1024, 768, 0,0,0,0, 32,0, GLFW_WINDOW ) )
{
fprintf( stderr, "Failed to open GLFW window\n" );
glfwTerminate();
return -1;
}
//We can't call glGetString until the window is created (on mac).
//http://www.idevgames.com/forums/thread-4218.html
const GLubyte* v=glGetString(GL_VERSION);
printf("OpenGL version: %s\n", (char*)v);
/***** BAD CODE SOMEWHERE IN THIS BLOCK.*/
{
//http://www.opengl-tutorial.org/beginners-tutorials/tutorial-2-the-first-triangle/
GLuint vertexArrayID;
//generate a vertexArray, put the identifier in VertexArrayID
glGenVertexArrays(1, &vertexArrayID);
//bind it. (?)
glBindVertexArray(vertexArrayID);
// An array of 3 vectors which represents 3 vertices
static const GLfloat g_vertex_buffer_data[] = {
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
};
// This will identify our vertex buffer
GLuint vertexBufferId;
// Generate 1 buffer, put the resulting identifier in vertexbuffer
glGenBuffers(1, &vertexBufferId);
// The following commands will talk about our 'vertexbuffer' buffer
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferId);
// Give our vertices to OpenGL.
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
/* END BAD CODE BLOCK */
}
// Initialize GLEW
glewExperimental=true; // Needed in core profile
if (glewInit() != GLEW_OK) {
fprintf(stderr, "Failed to initialize GLEW\n");
return -1;
}
glfwSetWindowTitle( "Tutorial 01" );
// Ensure we can capture the escape key being pressed below
glfwEnable( GLFW_STICKY_KEYS );
do{
// 1rst attribute buffer : vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferId);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// Draw the triangle !
glDrawArrays(GL_TRIANGLES, 0, 3); // Starting from vertex 0; 3 vertices total -> 1 triangle
glDisableVertexAttribArray(0);
// Swap buffers
glfwSwapBuffers();
} // Check if the ESC key was pressed or the window was closed
while( glfwGetKey( GLFW_KEY_ESC ) != GLFW_PRESS &&
glfwGetWindowParam( GLFW_OPENED ) );
}

glutDisplayFunc displays garbage

I am trying to incorporate OpenGL into my C++ code for the first time. As a start, I made this very primitive code, which defines a class called polygon and should display a polygon with a method polygon.draw(). Right now, everything below resides in a single main.cpp file, though for easy reading I am separating it into sections here:
The problem is, the code below compiles and runs all right. Only, when the window named "simple" is created, it displays garbage (mostly collected from my computer background screen). :(
Firstly, the class polygon:
#include <GL/glut.h>
#include "utility.hpp"
#include <vector>
void init(void);
class nikPolygon{
public:
std::vector<nikPosition> m_vertices;
nikColor m_color;
double m_alpha;
// constructors
// without alpha (default is 1.0)
nikPolygon(std::vector<nikPosition> vList, nikColor c):
m_vertices(vList), m_color(c), m_alpha(1.0){
}
nikPolygon(std::vector<nikPosition> vList, nikColor c, double a):
m_vertices(vList), m_color(c), m_alpha(a){
}
// default constructor
nikPolygon(){
}
// member functions
// add vertex
void addVertex(nikPosition v) { m_vertices.push_back(v); }
// remove vertex
void removeVertex(nikPosition v);
// adjust vertex
void modifyVertex(unsigned int vIndex, nikPosition newPosition);
// fill color
void setColor(nikColor col) { m_color = col; }
// set alpha
void setAlpha(double a) { m_alpha = a; }
// display
void drawPolygon(void){
// color the objet
glColor4f(m_color.red,
m_color.green,
m_color.blue,
m_alpha);
// construct polygon
glBegin(GL_POLYGON);
for (std::vector<nikPosition>::iterator it = m_vertices.begin();
it != m_vertices.end(); it++)
glVertex2f(it->x, it->y);
glEnd();
// send to screen
glFlush();
}
void draw(void);
};
Then the c/c++ callback interface (trampoline/thunk):
// for c++/c callback
nikPolygon* currentPolygon;
extern "C"
void drawCallback(void){
currentPolygon->drawPolygon();
}
void nikPolygon::draw(){
currentPolygon = this;
glutDisplayFunc(drawCallback);
}
And then the rest of it:
// initialize openGL etc
void init(void){
// set clear color to black
glClearColor(0.0, 0.0, 0.0, 0.0);
// set fill color to white
glColor3f(1.0, 1.0, 1.0);
// enable transperancy
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// setup standard orthogonal view with clipping
// box as cube of side 2 centered at origin
// this is the default view
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(-1.0, 1.0, -1.0, 1.0);
}
int main(int argc, char** argv){
nikPolygon poly;
poly.addVertex(nikPosition(-0.5, -0.5));
poly.addVertex(nikPosition(-0.5, 0.5));
poly.addVertex(nikPosition(0.5, 0.5));
poly.addVertex(nikPosition(0.5, -0.5));
poly.setColor(nikColor(0.3, 0.5, 0.1));
poly.setAlpha(0.4);
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize(500, 500);
glutInitWindowPosition(0, 0);
glutCreateWindow("simple");
init();
poly.draw();
glutMainLoop();
}
First and foremost, the original code is completely overengineered. This may be part of the original confusion. Also, there's not really much you can do to fix the code without throwing out most of it. For example, representing each polygon (triangle) with its own object instance is about as inefficient as it can get. You normally do not want to do this. The usual approach to representing a model is a Mesh, which consists of a list/array of vertex attributes and a list of faces, which is in essence a list of 3-tuples defining the triangles that make up the surface of the mesh. In class form:
class Mesh
{
std::vector<float> vert_position;
std::vector<float> vert_normal;
std::vector<float> vert_texUV;
std::vector<unsigned int> faces_indices;
public:
void draw();
};
Then, to draw a mesh, you use vertex arrays:
void Mesh::draw()
{
// This is the API as used up to including OpenGL-2.1
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
// sizes of attributes depend on actual application
glVertexPointer(3, GL_FLOAT, 0, &vert_position[0]);
glNormalPointer(GL_FLOAT, 0, &vert_normal[0]);
glTexCoordPointer(2, GL_FLOAT, 0, &vert_texUV[0]);
glDrawElements(GL_TRIANGLES, faces_indices.size(), GL_UNSIGNED_INT, &faces_indices[0]);
}
You put references to these Mesh object instances into a list, or array, and iterate over that in the display function, calling the draw method, after setting the appropriate transformation.
std::list<Mesh> list_meshes;
void display()
{
clear_framebuffer();
set_viewport_and_projection();
for(std::list<Mesh>::iterator mesh_iter = list_meshes.begin();
mesh_iter != list_meshes.end();
mesh_iter++) {
mesh_iter->draw();
}
swap_buffers();
}
At the beginning of your drawPolygon function you need to do a glClear(GL_COLOR_BUFFER_BIT);
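Applied to the drawPolygon method from the question, that looks roughly like this:
void drawPolygon(void){
    // Clear the colour buffer first, so the window starts from the clear colour
    // instead of whatever memory happened to back it.
    glClear(GL_COLOR_BUFFER_BIT);
    glColor4f(m_color.red, m_color.green, m_color.blue, m_alpha);
    glBegin(GL_POLYGON);
    for (std::vector<nikPosition>::iterator it = m_vertices.begin();
         it != m_vertices.end(); it++)
        glVertex2f(it->x, it->y);
    glEnd();
    glFlush();
}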