I'm trying to learn modern OpenGL by creating classes for primitive types such as cubes and spheres. I've run into a problem, though: my cube class doesn't render anything from its Draw() function, but if I move all of that code into main() it works fine.
Here's the code for cube.cpp:
#include "cube.h"
#include <iostream>
GLuint Cube::indexArr[36] = {
0, 1, 2, // front face
2, 3, 1,
4, 5, 6,
6, 7, 4,
7, 3, 0,
0, 4, 7,
6, 2, 1,
1, 5, 6,
0, 1, 5,
5, 4, 0,
3, 2, 6,
6, 7, 3
};
bool Cube::cubeSetUp = false;
GLuint Cube::cubeVBO = 0;
GLuint Cube::cubeVAO = 0;
GLuint Cube::cubeEBO = 0;
Cube::Cube()
{
if (!cubeSetUp)
SetUpCube();
//define the 8 vertices that make up a cube
objectVerts = new GLfloat[24] {
-0.5f, -0.5f, -0.5f, // front bottom left 0
0.5f, -0.5f, -0.5f, // front bottom right 1
0.5f, 0.5f, -0.5f, // front top right 2
-0.5f, 0.5f, -0.5f, // front top left 3
-0.5f, -0.5f, 0.5f, // back bottom left 4
0.5f, -0.5f, 0.5f, // back bottom right 5
0.5f, 0.5f, 0.5f, // back top right 6
-0.5f, 0.5f, 0.5f // back top left 7
};
}
void Cube::Draw()
{
glBindVertexArray(cubeVAO);
//GLfloat modelMatrix[16];
//transform.affine.ConvertToOpenGLMatrix(modelMatrix);
//glUniformMatrix4fv(modelMatrixLoc, 1, GL_FALSE, modelMatrix);
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
}
void Cube::SetUpCube()
{
cubeSetUp = true;
glGenVertexArrays(1, &cubeVAO);
glGenBuffers(1, &cubeVBO);
glGenBuffers(1, &cubeEBO);
glBindVertexArray(cubeVAO);
glBindBuffer(GL_ARRAY_BUFFER, cubeVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(objectVerts), objectVerts, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, cubeEBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indexArr), indexArr, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
Code for cube.h:
#ifndef __CUBE__H_
#define __CUBE__H_
#include "scene_object.h"
class Cube : public SceneObject
{
public:
/***************
* Constructor *
***************/
Cube();
/***********************
* Overloaded Function *
***********************/
void Draw();
private:
static GLuint indexArr[36];
static bool cubeSetUp;
static GLuint cubeVBO, cubeVAO, cubeEBO;
void SetUpCube();
};
#endif
Am I doing something wrong? Does OpenGL have a problem with using a static array for my index array? I tried getting rid of the element buffer object and just using glDrawArrays(...), but that didn't work either. I can provide the code for main as well, if needed.
Also, the vertex shader doesn't do anything special. It just receives the vertex position and passes it through to gl_Position.
Thanks for any help.
Code for main.cpp:
#define GLEW_STATIC
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <iostream>
#include <fstream>
#include <string>
#include "shader.h"
#include "cube.h"
#include "camera.h"
using namespace std;
int main(int argc, char* argv[])
{
if (argc != 3)
{
cout << "Incorrect number of arguments!" << endl;
cout << "Usage: [executable] vertexShader fragmentShader" << endl;
return -1;
}
glfwInit();
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
GLFWwindow* window = glfwCreateWindow(800, 600, "Scene Description Language Generator", nullptr, nullptr);
if (window == nullptr)
{
std::cout << "Failed to create the GLFW window" << std::endl;
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window);
glewExperimental = GL_TRUE;
if (glewInit() != GLEW_OK)
{
std::cout << "Failed to initialize GLEW" << std::endl;
return -1;
}
glEnable(GL_DEPTH_TEST);
int width, height;
glfwGetFramebufferSize(window, &width, &height);
glViewport(0, 0, width, height);
Shader shader(argv[1], argv[2]);
Cube c;
GLuint modelMat = glGetUniformLocation(shader.GetProgram(), "model");
c.SetModelMatrixLoc(modelMat);
while (!glfwWindowShouldClose(window))
{
glfwPollEvents();
glClearColor(0.2f, 0.3f, 0.3f, 1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
shader.Use();
c.Draw();
glfwSwapBuffers(window);
}
glfwTerminate();
return 0;
}
Edit: Added the code for main.cpp and cube.h in case it is valuable to see.
At least three things are going wrong here.
First, as pointed out by pleluron, the cube constructor gets called before the application window and GL context have been created. On older systems this would usually crash. On modern systems there is often a default GL context created by the windowing system, so the cube's buffers get created in that context. Then your application creates its own window and context, and the cube is orphaned.
Creating OpenGL "objects" should happen only in a designated method that is called once it is safe to do so. In Qt that's the view's initializeGL method, in macOS Cocoa it's prepareOpenGL; on other systems it's the first window draw.
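One way to follow that rule here is to defer all GL calls until the first Draw(), by which point a context is guaranteed to be current. A minimal sketch against the Cube class above (the cubeSetUp flag already exists, so no new members are needed):

void Cube::Draw()
{
    if (!cubeSetUp)   // first Draw(): a GL context must be current by now
        SetUpCube();  // moved here from the constructor
    glBindVertexArray(cubeVAO);
    glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, 0);
    glBindVertexArray(0);
}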
Second problem, as pointed out by BDL: sizeof(objectVerts) does not do what you think it does. A print statement will show you.
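Since objectVerts is allocated with new GLfloat[24], its declared type is GLfloat*, so sizeof(objectVerts) is the size of a pointer (typically 8 bytes), not the 96 bytes of vertex data you meant to upload. A sketch of the fix in SetUpCube(), assuming the 24-float array from the constructor:

std::cout << sizeof(objectVerts) << std::endl; // prints 8 (pointer size), not 96
glBufferData(GL_ARRAY_BUFFER, 24 * sizeof(GLfloat), objectVerts, GL_STATIC_DRAW);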
Third problem: you're never calling glGetError(). OpenGL does not throw exceptions when you do something wrong, and often doesn't crash; it just silently does nothing. Develop the habit of calling glGetError() at the end of each important function or method just to be sure everything is OK. If errors do occur, add more glGetError() calls to narrow down the cause.
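A tiny helper makes this habit cheap. This is an illustrative sketch (the name checkGLError is made up, not part of the code above):

#include <iostream>

static void checkGLError(const char* where)
{
    // Drain every pending error; glGetError() returns GL_NO_ERROR once the queue is empty.
    for (GLenum err; (err = glGetError()) != GL_NO_ERROR; )
        std::cerr << "GL error 0x" << std::hex << err << std::dec << " in " << where << std::endl;
}

// Usage: checkGLError("Cube::SetUpCube");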
So I am using GLEW & SDL2. At the moment I'm having trouble with drawing a simple rectangle.
I've done this before and got it to work without problems. (I have checked my old code, but could not find any significant differences.)
Here's what the output should look like, aside from a different background color (screenshot omitted).
However, this is what the output actually looks like (screenshot omitted):
When I check the output with RenderDoc, all the values seem to be as they should be, but for some reason the output rectangle is significantly larger than the viewport (assuming the white border in the output window represents the viewport).
I've googled around a bit, but could not find any solution to this issue. I figure I'm just missing a single function call somewhere, which is causing the program to output things in this weird way.
Here's the relevant code:
game.cpp
#include "Game.h"
void Game::Game::main_loop()
{
//Placeholder setup;
float vertices[] = {
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
-0.5f, 0.5f, 0.0f,
};
unsigned int indices[] = {
0, 1, 2,
2, 3, 0
};
unsigned int vertex_array, vertex_buffer, element_buffer;
glGenVertexArrays(1, &vertex_array);
glBindVertexArray(vertex_array);
glGenBuffers(1, &vertex_buffer);
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glGenBuffers(1, &element_buffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, element_buffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
Asset::Loader l;
auto s = l.load_shader("res/GUI/Shaders/Textured_quad.vert", "res/GUI/Shaders/Textured_quad.frag");
keep_going = true;
while(keep_going)
{
handle_events();
renderer.clear();
s->use(); //Just calls 'glUseProgram' on the shader inside this object
glBindVertexArray(vertex_array);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
renderer.show();
}
}
void Game::Game::handle_events()
{
while(SDL_PollEvent(&event))
{
switch(event.type)
{
case SDL_MOUSEBUTTONDOWN:
case SDL_QUIT:
keep_going = false;
}
}
}
Renderer.cpp
#include "Renderer.h"
Game::Graphics::Renderer::Renderer()
{
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 5);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 5);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 5);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 16);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
window = SDL_CreateWindow(
"GUI",
SDL_WINDOWPOS_CENTERED_DISPLAY(1),
SDL_WINDOWPOS_CENTERED_DISPLAY(1),
640,
480,
SDL_WINDOW_OPENGL
);
context = SDL_GL_CreateContext(
window
);
if(!window)
{
std::cerr << "Window Error: " << SDL_GetError() << std::endl;
return;
}
if(!context)
{
std::cerr << "GL Context Error: " << SDL_GetError() << std::endl;
return;
}
glewInit();
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glViewport(0, 0, 640, 480);
glEnable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);
glEnable(GL_LIGHTING);
glDisable(GL_BLEND);
}
Game::Graphics::Renderer::~Renderer()
{
SDL_HideWindow(window);
SDL_GL_DeleteContext(context);
SDL_DestroyWindow(window);
}
void Game::Graphics::Renderer::clear()
{
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}
void Game::Graphics::Renderer::show()
{
SDL_GL_SwapWindow(window);
}
Shader
#version 330 core
layout (location = 0) in vec3 a_pos;
void main()
{
gl_Position = vec4(a_pos, 0.0f);
}
//fragment
#version 330 core
out vec4 o_color;
void main()
{
o_color = vec4(1.0f, 0.0f, 0.0f, 1.0f);
}
gl_Position = vec4(a_pos, 0.0f);
The X/Y/Z coordinates in gl_Position get divided by the W coordinate to produce the normalized device coordinates. You are setting the W coordinate to 0, which means the X and Y coordinates become infinity and -infinity (1/0 and -1/0) and the Z coordinate becomes NaN.
Change it to:
gl_Position = vec4(a_pos, 1.0f);
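For context: once you add a projection matrix, the input W stays 1.0 and the matrix produces the clip-space W used by the perspective divide. An illustrative line (the model/view/projection uniforms are assumptions, not part of the question's shader):

gl_Position = projection * view * model * vec4(a_pos, 1.0);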
#define GLEW_STATIC
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <glm.hpp>
#include <iostream>
#include <fstream>
#include <string>
#define WIDTH 800
#define HEIGHT 600
#define TITLE "Dynamic"
GLFWwindow* window;
int vaoID;
float vertices[] = {-0.5f, 0.5f, 0, -0.5f, -0.5f, 0, 0.5f, 0.5f, 0, 0.5f, 0.5f, 0, -0.5f, -0.5f, 0, 0.5f, -0.5f, 0};
void loadToVAO(float vertices[]);
void update() {
loadToVAO(vertices);
while (!glfwWindowShouldClose(window)) {
glfwPollEvents();
glClear(GL_COLOR_BUFFER_BIT);
glClearColor(1, 0, 0, 1);
glDrawArrays(GL_TRIANGLES, 0, 6);
glfwSwapBuffers(window);
}
}
int main() {
if (!glfwInit())
std::cout << "Couldn't initialize GLFW!" << std::endl;
window = glfwCreateWindow(WIDTH, HEIGHT, TITLE, NULL, NULL);
glfwMakeContextCurrent(window);
glfwSwapInterval(1);
if (GLEW_OK != glewInit())
std::cout << "GLEW fucked up!" << std::endl;
std::cout << "Your GL version: " << glGetString(GL_VERSION) << std::endl;
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
update();
}
void loadShaders() {
}
void loadToVAO(float vertices[]) {
GLuint vboID;
GLuint vaoID;
glGenBuffers(1, &vboID);
glBindBuffer(GL_ARRAY_BUFFER, vboID);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices) * 8, vertices, GL_STATIC_DRAW);
glGenVertexArrays(1, &vaoID);
glBindVertexArray(vaoID);
std::cout << vaoID << std::endl;
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);
glEnableVertexAttribArray(0);
}
This is my code for creating and rendering a VAO that contains a VBO with the vertex positions. But unfortunately it doesn't work. It draws a triangle instead of a quad. But when I put the code of the loadToVAO function in the update function before the while loop, it works.
The sizeof operator yields the size of its operand's type. When you copy the content of loadToVAO into the update function, sizeof(vertices) is basically sizeof(float[18]), which is the total size of the array (72 bytes).
But inside loadToVAO, sizeof(vertices) is applied to the function parameter vertices, not to the global variable. Since array parameters of unspecified size in C++ decay to pointers to the element type, we have here:
sizeof(vertices) == sizeof(float[]) == sizeof(float*)
which is the size of a pointer (4 or 8 bytes) and not the size of the data anymore.
To solve this, you can either also pass the size of the array to the function, which is the C way to go, or, more idiomatically in modern C++, use std::array or std::vector to store the data in the first place, as sketched below.
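For example, a std::vector-based loadToVAO might look like this (a sketch, not the original code):

#include <vector>

void loadToVAO(const std::vector<float>& verts) {
    GLuint vaoID, vboID;
    glGenVertexArrays(1, &vaoID);
    glBindVertexArray(vaoID);
    glGenBuffers(1, &vboID);
    glBindBuffer(GL_ARRAY_BUFFER, vboID);
    // The vector knows its element count, so the byte size is always right.
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(float), verts.data(), GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glEnableVertexAttribArray(0);
}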
Edit: On second look I saw that you used sizeof(vertices) * 8 in the first place. I'm not really sure where the 8 comes from, since you neither have 8 elements nor does a float have a size of 8 bytes.
I have been fiddling around with making a game/rendering engine, and I have found that I can have a class for a shader object, but if I wrap a VAO in a class, it won't render.
The shaders return no errors, and the VAO and shaders are valid OpenGL objects.
UPDATE
The problem is this line:
glBufferData(GL_ARRAY_BUFFER, sizeof(arrFVertex), arrFVertex, GL_STATIC_DRAW);
As BDL suggested in the comments, I thought about it and realized it should be:
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * intNumVertex * 3, arrFVertex, GL_STATIC_DRAW);
UPDATE 2
In response to being put on hold, here is a Minimal, Complete, and Verifiable example:
#include <OpenGL/gl3.h>
#include <SDL2/SDL.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
SDL_Window *window = NULL;
SDL_GLContext openGLRenderer;
bool bolRunning = true;
int intGLVersionMajor, intGLVersionMinor;
GLfloat arrFVertex[] = {
0.5f, 0.5f, 0.0f, // Top Right
0.5f, -0.5f, 0.0f, // Bottom Right
-0.5f, 0.5f, 0.0f, // Top Left
0.5f, -0.5f, 0.0f, // Bottom Right
-0.5f, -0.5f, 0.0f, // Bottom Left
-0.5f, 0.5f, 0.0f // Top Left
};
GLuint intVAO;
GLuint intVBO;
GLuint intShaderAttribPosition;
GLuint intShaderProgram;
GLuint intNumVertex = 6;
void loadShaders(const char *strVertexShaderSource, const char *strFragmentShaderSource) {
intShaderProgram = glCreateProgram();
GLuint intVertexShader;
intVertexShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(intVertexShader, 1, &strVertexShaderSource, NULL);
glCompileShader(intVertexShader);
GLuint intFragmentShader;
intFragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(intFragmentShader, 1, &strFragmentShaderSource, NULL);
glCompileShader(intFragmentShader);
glAttachShader(intShaderProgram, intVertexShader);
glAttachShader(intShaderProgram, intFragmentShader);
glLinkProgram(intShaderProgram);
glDeleteShader(intVertexShader);
glDeleteShader(intFragmentShader);
}
void buildVAO(GLfloat *arrFVertex) {
intShaderAttribPosition = glGetAttribLocation(intShaderProgram, "f3Position");
glGenVertexArrays(1, &intVAO);
glBindVertexArray(intVAO);
glGenBuffers(1, &intVBO);
glBindBuffer(GL_ARRAY_BUFFER, intVBO);
glVertexAttribPointer(intShaderAttribPosition, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (GLvoid *)0);
glEnableVertexAttribArray(intShaderAttribPosition);
glBufferData(GL_ARRAY_BUFFER, sizeof(arrFVertex), arrFVertex, GL_STATIC_DRAW);
glBindVertexArray(0);
}
int main(int argc, char **argv) {
SDL_Init(SDL_INIT_EVERYTHING);
window = SDL_CreateWindow("GSEngine",
SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
640, 480,
SDL_WINDOW_OPENGL);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
if (window == NULL) {
printf("Could not create window: %s\n", SDL_GetError());
exit(1);
}
openGLRenderer = SDL_GL_CreateContext(window);
SDL_GL_MakeCurrent(window, openGLRenderer);
glViewport(0, 0, 640, 480);
loadShaders("#version 330 core\n\
in vec3 f3Position;\n\
void main() {\n\
gl_Position = vec4(f3Position, 1.0);\n\
}", "#version 330 core\n\
out vec4 f4Color;\n\
void main() {\n\
f4Color = vec4(1.0f, 0.5f, 0.2f, 1.0f);\n\
}");
buildVAO(arrFVertex);
while (bolRunning) {
SDL_Event event;
while (SDL_PollEvent(&event)) {
if (event.type == SDL_QUIT) {
bolRunning = false;
}
}
SDL_GL_MakeCurrent(window, openGLRenderer);
glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(intShaderProgram);
glDrawArrays(GL_TRIANGLES, 0, intNumVertex);
SDL_GL_SwapWindow(window);
}
glDeleteBuffers(1, &intVBO);
glDeleteVertexArrays(1, &intVAO);
glDeleteProgram(intShaderProgram);
SDL_GL_DeleteContext(openGLRenderer);
SDL_DestroyWindow(window);
SDL_Quit();
return 0;
}
The problem has nothing to do with the VAO, but with the VBO. Since you pass a pointer to the function:
void GSMesh::build(GLfloat *arrFVertex, GSShader *shader, int _intNumVertex)
{
glBufferData(GL_ARRAY_BUFFER, sizeof(arrFVertex), arrFVertex, GL_STATIC_DRAW);
}
sizeof(arrFVertex) == sizeof(GLfloat*), which is the size of the pointer, not the size of the array it points to. The correct call looks like this:
glBufferData(GL_ARRAY_BUFFER,
sizeof(GLfloat) * _intNumVertex * 3, arrFVertex,
GL_STATIC_DRAW);
In general I have to add that this is not how questions should be asked on SO. It would have been good if you had included at least the relevant parts of the code in your question.
Despite what the spec says, with some drivers you must make the shader program active before you can get the location of an attribute or uniform. This may be what is causing your problems.
In your code that would mean in your GSMesh::build method adding:
shader->use();
Before:
intShaderAttribPosition = glGetAttribLocation(shader->intShaderProgram, "f3Position");
Personally, if the version of OpenGL you are using supports explicit attribute locations (layout qualifiers), I'd use them instead.
In the vertex shader you could have something like:
layout (location = 0) in vec3 position;
layout (location = 1) in vec2 tex_coords;
And then in your mesh class all you need is:
struct Vertex
{
glm::vec3 position;
glm::vec2 tex_coords;
};
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)offsetof(Vertex, position));
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)offsetof(Vertex, tex_coords));
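Uploading the matching interleaved buffer would then look something like this (assuming the mesh data lives in a std::vector<Vertex>; buildVertices() is a hypothetical helper, not part of the code above):

std::vector<Vertex> vertices = buildVertices(); // hypothetical source of mesh data
glBufferData(GL_ARRAY_BUFFER,
             vertices.size() * sizeof(Vertex),  // byte size derived from element count
             vertices.data(), GL_STATIC_DRAW);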
So I'm trying to write a simple 3D rendering engine using GLFW and GLEW in C++. However, the program crashes on the glDrawArrays(GL_TRIANGLES, 0, model.indicesCount); call. I'm pretty sure I'm doing something wrong, but I can't figure out where or what needs to change. I'm actually rewriting a perfectly working engine from Java.
My code:
common.h:
#ifndef _COMMON
#define _COMMON
// standard stuff
#include <iostream>
#include <list>
// openGL stuff
#include "GL\glew.h"
#include "GLFW\glfw3.h"
// my stuff
#include "DisplayManager.h"
#include "RawModel.h"
#include "Loader.h"
#include "Renderer.h"
#endif
DisplayManager.h:
#pragma once
#include "common.h"
class DisplayManager{
private:
GLFWwindow* window;
public:
void create(int width = 1280, int height = 720, std::string title = "Untitled"){
if(!glfwInit()){
std::cerr << "GLFW init failed\n";
system("pause");
exit(EXIT_FAILURE);
}
glfwWindowHint(GLFW_SAMPLES, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
window = glfwCreateWindow(width, height, title.c_str(), NULL, NULL);
glfwMakeContextCurrent(window);
if(!window){
std::cerr << "Failed to create a window\n";
system("pause");
exit(EXIT_FAILURE);
}
glewExperimental = GL_TRUE;
if(glewInit() != GLEW_OK){
std::cerr << "GLEW init failed\n";
system("pause");
glfwTerminate();
exit(EXIT_FAILURE);
}
}
void update(){
glfwSwapBuffers(window);
glfwPollEvents();
}
int isCloseRequested(){
return glfwWindowShouldClose(window);
}
void close(){
glfwDestroyWindow(window);
glfwTerminate();
}
};
RawModel.h:
#pragma once
struct RawModel{
public:
GLuint vaoID;
GLuint indicesCount;
RawModel(GLuint vaoID, GLuint indicesCount){
this->vaoID = vaoID;
this->indicesCount = indicesCount;
}
};
Loader.h:
#pragma once
#include "common.h"
#define VERTEX_ATTRIB_INDEX 0
#define VERTEX_SIZE 3
class Loader{
public:
// functions
RawModel loadModel(const GLfloat vertices[], GLuint verticesCount){
GLuint vaoID = createAndBindVao();
storeFloatDataInVAO(VERTEX_ATTRIB_INDEX, vertices, verticesCount, VERTEX_SIZE);
unbindVAO();
return RawModel(vaoID, verticesCount);
}
void cleanUp(){
std::list<GLuint>::iterator vao_it = vaos.begin();
for(; vao_it != vaos.end(); ++vao_it){
const GLuint vao = *vao_it;
glDeleteVertexArrays(1, &vao);
}
std::list<GLuint>::iterator vbo_it = vbos.begin();
for(; vbo_it != vbos.end(); ++vbo_it){
const GLuint vbo = *vbo_it;
glDeleteBuffers(1, &vbo);
}
}
private:
// variables
std::list<GLuint> vaos;
std::list<GLuint> vbos;
// functions
GLuint createAndBindVao(){
GLuint vaoID;
glGenVertexArrays(1, &vaoID);
vaos.push_back(vaoID);
glBindVertexArray(vaoID);
return vaoID;
}
void storeFloatDataInVAO(const GLuint attributeIndex, const GLfloat data[], const GLuint dataLength, const GLuint chunkSize){
GLuint vboID;
glGenBuffers(1, &vboID);
vbos.push_back(vboID);
glBindBuffer(GL_VERTEX_ARRAY, vboID);
glBufferData(GL_VERTEX_ARRAY, sizeof(GLfloat) * dataLength * chunkSize, data, GL_STATIC_DRAW);
glVertexAttribPointer(attributeIndex, chunkSize, GL_FLOAT, GL_FALSE, 0, (void*)0);
glBindBuffer(GL_VERTEX_ARRAY, 0);
}
void unbindVAO(){
glBindVertexArray(0);
}
};
Renderer.h:
#pragma once
#include "common.h"
#define BLACK 0.0f, 0.0f, 0.0f, 1.0f
#define WHITE 1.0f, 1.0f, 1.0f, 1.0f
#define RED 1.0f, 0.0f, 0.0f, 1.0f
#define GREEN 0.0f, 1.0f, 0.0f, 1.0f
#define BLUE 0.0f, 0.0f, 1.0f, 1.0f
#define YELLOW 1.0f, 1.0f, 0.0f, 1.0f
class Renderer{
public:
void prepare(){
glClear(GL_COLOR_BUFFER_BIT);
glClearColor(YELLOW);
};
void render(RawModel model){
glBindVertexArray(model.vaoID);
glEnableVertexAttribArray(VERTEX_ATTRIB_INDEX);
glDrawArrays(GL_TRIANGLES, 0, model.indicesCount);
glDisableVertexAttribArray(VERTEX_ATTRIB_INDEX);
glBindVertexArray(0);
}
};
and the Source.cpp with the main function:
#include "common.h"
static const GLfloat VERTICES[] = {
// X Y Z
-0.5f, 0.5f, 0,
-0.5f, -0.5f, 0,
0.5f, 0.5f, 0
};
int main(){
DisplayManager display;
display.create();
Loader loader;
RawModel model = loader.loadModel(VERTICES, 3);
Renderer renderer;
// main loop
while(!display.isCloseRequested()){
renderer.prepare();
renderer.render(model);
display.update();
}
loader.cleanUp();
display.close();
return EXIT_SUCCESS;
}
If I comment out the glDrawArrays(GL_TRIANGLES, 0, model.indicesCount); call, it seems to work and I get a green window.
The error is here:
glBindBuffer(GL_VERTEX_ARRAY, vboID);
glBufferData(GL_VERTEX_ARRAY, sizeof(GLfloat) * dataLength * chunkSize, data, GL_STATIC_DRAW);
glVertexAttribPointer(attributeIndex, chunkSize, GL_FLOAT, GL_FALSE, 0, (void*)0);
glBindBuffer(GL_VERTEX_ARRAY, 0);
GL_VERTEX_ARRAY is not a valid buffer target in OpenGL; the correct one is GL_ARRAY_BUFFER. As a consequence, all of these commands should generate a GL error. The glVertexAttribPointer() call should generate GL_INVALID_OPERATION, since no GL_ARRAY_BUFFER is bound at the time of the call; the others should generate GL_INVALID_ENUM. As a result you are left with an essentially uninitialized vertex attribute pointer; you later enable that attribute array and try to draw from it, resulting in the crash.
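With the correct target, the same block in storeFloatDataInVAO becomes (sketch of the fix):

glBindBuffer(GL_ARRAY_BUFFER, vboID);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * dataLength * chunkSize, data, GL_STATIC_DRAW);
glVertexAttribPointer(attributeIndex, chunkSize, GL_FLOAT, GL_FALSE, 0, (void*)0);
glBindBuffer(GL_ARRAY_BUFFER, 0);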
Another thing: I don't see any shaders in your code. Shaders are mandatory in the core profile, which you use. The glDrawArrays() call should actually fail because of this, although some implementations ignore that and substitute trivial shaders in this scenario.
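A minimal shader pair that satisfies the 3.2 core profile could look like this (a sketch; the question contains no shaders, so the attribute and output names here are made up):

// vertex shader (GLSL 150 matches the 3.2 core context requested above)
#version 150 core
in vec3 position;
void main() { gl_Position = vec4(position, 1.0); }

// fragment shader
#version 150 core
out vec4 fragColor;
void main() { fragColor = vec4(1.0, 1.0, 1.0, 1.0); }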