glDeleteBuffers not working in a separate thread - OpenGL

In my rendering application, if I run the render function in the main loop everything works fine, but if I move the rendering function to another thread then the destructors of the objects are no longer able to release their buffers.
When an object gets destroyed its destructor is called, but it seems as if glDeleteBuffers is not able to release the buffer.
How I came to this conclusion:
When everything runs in the main loop:
1) If I create an object, the VAO number for that object is 1.
2) After destroying the object, the next object's VAO is also assigned number 1.
But when the rendering part goes to a separate thread:
1) The VAO number keeps incrementing with every object.
2) System RAM usage also keeps increasing, and the memory is only released when I close the application.
3) The destructor for an object is definitely called when I delete it, but it seems as if the destructor is not able to release the buffer.
//#define GLEW_STATIC
#include <gl\glew.h>
#include <glfw3.h>
#include "TreeModel.h"
#include "ui_WavefrontRenderer.h"
#include <QtWidgets/QApplication>
#include <QMessageBox>
#include <thread>
#define FPS_WANTED 60
const double limitFPS = 1.0 / 50.0;
Container cont;
const unsigned int SCR_WIDTH = 800;
const unsigned int SCR_HEIGHT = 600;
void framebuffer_size_callback(GLFWwindow* window, int width, int height);
void processInput(GLFWwindow *window);
GLFWwindow* window = nullptr;
void RenderThread(WavefrontRenderer* w)
{
glfwMakeContextCurrent(window);
GLenum GlewInitResult;
glewExperimental = GL_TRUE;
GlewInitResult = glewInit();
if (GLEW_OK != GlewInitResult) // Check if glew is initialized properly
{
QMessageBox msgBox;
msgBox.setText("Not able to Initialize Glew");
msgBox.exec();
glfwTerminate();
}
if (window == NULL)
{
QMessageBox msgBox;
msgBox.setText("Not able to create GL Window");
msgBox.exec();
glfwTerminate();
//return -1;
}
w->InitData();
glEnable(GL_MULTISAMPLE);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBlendEquation(GL_FUNC_ADD);
while (!glfwWindowShouldClose(window))
{
// input
// -----
processInput(window);
// - Measure time
glClearColor(0.3, 0.3, 0.3, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
w->render(); // DO the Rendering
glfwSwapBuffers(window);
glfwPollEvents();
}
glfwTerminate();
std::terminate();
}
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
cont.SetName("RootItem");
TreeModel* model = new TreeModel("RootElement", &cont);
WavefrontRenderer w(model);
w.show();
glfwInit();
glfwWindowHint(GLFW_RESIZABLE, GL_TRUE);
glfwWindowHint(GLFW_SAMPLES, 4);
window = glfwCreateWindow(SCR_WIDTH, SCR_HEIGHT, "Renderer", nullptr, nullptr); // Create the render window
glfwMakeContextCurrent(0);
std::thread renderThread(RenderThread, &w);
renderThread.detach();
return a.exec();
return 0;
}
Class definition for an object:
The render function w->render() calls the draw() function of an object.
The base class has a virtual destructor.
#include "Triangle.h"
#include "qdebug.h"
#include "qmessagebox.h"
float verticesTriangle[] = {
-50.0f, -50.0f, 0.0f, 0.0f , 0.0f,1.0f ,0.0f, 0.0f,
50.0f, -50.0f, 0.0f, 0.0f , 0.0f,1.0f ,1.0f, 0.0f,
0.0f, 50.0f, 0.0f, 0.0f, 0.0f,1.0f ,0.5f, 1.0f
};
Triangle::Triangle() : Geometry("TRIANGLE", true)
{
this->isInited = 0;
this->m_VBO = 0;
this->m_VAO = 0;
this->iNumsToDraw = 0;
this->isChanged = true;
}
Triangle::Triangle(const Triangle& triangle) : Geometry( triangle )
{
CleanUp();
this->isInited = 0;
this->m_VBO = 0;
this->m_VAO = 0;
this->iNumsToDraw = triangle.iNumsToDraw;
this->isChanged = true;
this->shader = ResourceManager::GetShader("BasicShader");
iEntries = 3;
}
Triangle& Triangle::operator=(const Triangle& triangle)
{
CleanUp();
Geometry::operator=(triangle);
this->isInited = 0;
this->m_VBO = 0;
this->m_VAO = 0;
this->iNumsToDraw = triangle.iNumsToDraw;
this->isChanged = true;
this->shader = ResourceManager::GetShader("BasicShader");
return (*this);
}
void Triangle::init()
{
glGenVertexArrays(1, &m_VAO);
glGenBuffers(1, &m_VBO);
glBindVertexArray(m_VAO);
glBindBuffer(GL_ARRAY_BUFFER, m_VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(verticesTriangle), verticesTriangle, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(6 * sizeof(float)));
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
isInited = true;
}
void Triangle::CleanUp()
{
if (!this->isInited)
{
return;
}
if (this->m_VAO)
glDeleteVertexArrays(1, &this->m_VAO);
if (this->m_VBO)
glDeleteBuffers(1, &this->m_VBO);
this->isInited = false;
}
void Triangle::draw()
{
if (isChanged)
{
init();
isChanged = false;
}
this->shader.Use();
glBindVertexArray(m_VAO);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);
}
Triangle::~Triangle()
{
if (this->m_VAO)
glDeleteVertexArrays(1, &this->m_VAO);
if (this->m_VBO)
glDeleteBuffers(1, &this->m_VBO);
this->isInited = false;
}

OpenGL contexts are thread-local state:
Every thread has exactly one or no OpenGL context active in it at any given time.
Each OpenGL context must be active in no or exactly one thread at any given time.
OpenGL contexts are not automatically migrated between threads.
I.e. if you don't explicitly make the OpenGL context in question non-current on the thread where it is currently active, and subsequently make it current on the thread you're calling glDeleteBuffers from, that call will have no effect, at least not on the context you expected it to affect.
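In this code that means the Triangle destructors run on the main (Qt) thread while the GLFW context is current on the render thread, so their glDeleteBuffers calls are silently ignored. A minimal sketch of one way around that, assuming you keep the context on the render thread: queue the handles from the destructor and delete them on the render thread each frame (the names deleteQueue, queueMutex, ScheduleDelete and FlushDeletes are hypothetical, not part of the posted code):
#include <mutex>
#include <utility>
#include <vector>
std::mutex queueMutex;
std::vector<std::pair<GLuint, GLuint>> deleteQueue; // (VAO, VBO) pairs
// Called from the object's destructor on the main thread instead of glDelete*:
void ScheduleDelete(GLuint vao, GLuint vbo)
{
    std::lock_guard<std::mutex> lock(queueMutex);
    deleteQueue.emplace_back(vao, vbo);
}
// Called once per frame on the render thread, where the context is current:
void FlushDeletes()
{
    std::lock_guard<std::mutex> lock(queueMutex);
    for (auto& handles : deleteQueue)
    {
        glDeleteVertexArrays(1, &handles.first);
        glDeleteBuffers(1, &handles.second);
    }
    deleteQueue.clear();
}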

Related

C++ Strange Access Violation with OpenGL [closed]

I'm pretty new to C++ so I hope I can get some help here.
I'm trying to port my game engine to C++, but C++ behaves a little bit... "strange".
The situation is the following:
If I run test1() it all works as it should.
main.cpp
#include <iostream>
#include "../headers/base.h"
#include "../headers/DemoGame.h"
#include "../headers/TestShader.h"
using namespace std;
using namespace engine;
void run(TestShader* t, GLuint VAO, GLFWwindow* w)
{
glfwPollEvents();
glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(t->progID);
glBindVertexArray(VAO);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);
glfwSwapBuffers(w);
}
void test1()
{
Window w = Window(800, 600, "test");
TestShader t = TestShader();
GLuint VAO, VBO;
GLfloat vertices[9] = {
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.0f, 0.5f, 0.0f
};
glGenVertexArrays(1, &VAO);
glGenBuffers(1, &VBO);
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
while (!glfwWindowShouldClose(w.getWindow()))
{
run(&t, VAO, w.getWindow());
}
}
void test2()
{
DemoGame game = DemoGame();
game.start();
}
int main()
{
test1();
return 0;
}
If I run test2() with the following classes involved:
Engine.h
#pragma once
#ifndef H_ENGINE
#define H_ENGINE
#include "base.h"
namespace engine
{
class Engine
{
private:
bool running;
public:
void start()
{
init();
process();
}
void stop()
{
this->running = false;
}
private:
void process()
{
update();
}
public:
virtual void init() = 0;
virtual void update() = 0;
virtual void render() = 0;
virtual void terminate() = 0;
};
}
#endif
DemoGame.h
#pragma once
#ifndef DEMO_DEMO_GAME
#define DEMO_DEMO_GAME
#include "base.h"
#include "Window.h"
#include "Engine.h"
#include "TestShader.h"
using namespace engine;
class DemoGame : public Engine
{
public:
Window* w;
TestShader* t;
GLuint VBO, VAO;
public:
DemoGame() : Engine() { }
public:
void init();
void update();
void render();
void terminate();
};
#endif
DemoGame.cpp
#include "../headers/DemoGame.h"
#include <iostream>
using namespace std;
void DemoGame::init()
{
cout << "ping" << endl;
Window wi = Window(800, 600, "test");
w = &wi;
TestShader te = TestShader();
t = &te;
GLfloat vertices[9] = {
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.0f, 0.5f, 0.0f
};
glGenVertexArrays(1, &VAO);
glGenBuffers(1, &VBO);
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
while (!glfwWindowShouldClose(w->getWindow()))
{
render();
}
}
void DemoGame::update()
{
}
void DemoGame::render()
{
glfwPollEvents();
glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(t->progID);
glBindVertexArray(VAO);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);
glfwSwapBuffers(w->getWindow());
}
void DemoGame::terminate()
{
}
It works as well. But as you may see, Engine.h is supposed to control the main loop. If I change the code a little bit:
Engine.h
#pragma once
#ifndef H_ENGINE
#define H_ENGINE
#include "base.h"
namespace engine
{
class Engine
{
private:
bool running;
public:
void start()
{
init();
running = true;
while (running)
{
process();
}
}
void stop()
{
this->running = false;
}
private:
void process()
{
update();
}
public:
virtual void init() = 0;
virtual void update() = 0;
virtual void render() = 0;
virtual void terminate() = 0;
};
}
#endif
DemoGame.cpp
#include "../headers/DemoGame.h"
#include <iostream>
using namespace std;
void DemoGame::init()
{
cout << "ping" << endl;
Window wi = Window(800, 600, "test");
w = &wi;
TestShader te = TestShader();
t = &te;
GLfloat vertices[9] = {
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.0f, 0.5f, 0.0f
};
glGenVertexArrays(1, &VAO);
glGenBuffers(1, &VBO);
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
void DemoGame::update()
{
render();
}
void DemoGame::render()
{
glfwPollEvents();
glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(t->progID);
glBindVertexArray(VAO);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);
glfwSwapBuffers(w->getWindow());
}
void DemoGame::terminate()
{
}
Now all of a sudden I get an "Access Violation". The question is why?
The file "base.h" just contains
#define GLEW_STATIC
#include "GL/glew.h"
#include "GLFW/glfw3.h"
and the classes Window and TestShader shouldn't matter because they work in the first two examples. As I stated before, I'm pretty new to C++ and I just don't understand why this doesn't work. Can you please help me find out at least why it doesn't work, or better, help me solve the problem?
This is my second attempt to get a useful answer from StackOverflow by posting a question. Please do me a favour and read the situation before you mark this question as a duplicate. The last time it wasn't a duplicate; the problem was entirely different.
Edit
As requested, the error message (sorry, I was at work, so the original message is in German; translated here):
Exception thrown at 0x0126489D in GLFWGame.exe: 0xC0000005:
Access violation reading location 0xCCCCCEA4.
If there is a handler for this exception, the program may be safely continued.
And I'll try to shorten the code to the most important parts.
You store addresses of stack objects that get deleted. For example,
Window wi = Window(800, 600, "test");
w = &wi;
Creates a local variable wi on the stack, which gets deleted automatically when it goes out of scope (which is the case at the end of the function). After that, w will point to an address that has already been freed, which will lead to big trouble when you try to access this variable later on, as you do here:
glfwSwapBuffers(w->getWindow());
If you want to create the window object on the heap, you have to use the following code in DemoGame::init():
w = new Window(800, 600, "test");
Don't forget to delete this object manually by calling delete w when you don't need it anymore. The same problem also occurs for the TestShader instance.
Side note: Window wi = Window(800, 600, "test"); is still strange syntax for creating objects on the stack. The correct way would be Window wi(800, 600, "test"); Have a look at this post for why it makes a difference: Calling constructors in c++ without new
Edit: Your first example just works because you are calling the render function inside the init function, so the objects do not go out of scope. Storing pointers to local objects is still not good practice.
Your problem is here:
Window wi = Window(800, 600, "test");
w = &wi;
TestShader te = TestShader();
t = &te;
Both the instance of Window and the instance of TestShader are local variables that will get cleaned up as soon as they go out of scope (the end of init), and hence remembering their addresses has no meaning. You will need to create those instances dynamically (new) or set them up within your class definition.
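A minimal sketch of the second option, assuming Window and TestShader can simply be owned by the class through smart pointers (everything else unchanged from the question):
#include <memory>
class DemoGame : public Engine
{
public:
    // Owning pointers keep the objects alive for the lifetime of DemoGame
    // and release them automatically in the destructor.
    std::unique_ptr<Window> w;
    std::unique_ptr<TestShader> t;
    GLuint VBO, VAO;
public:
    void init()
    {
        w = std::make_unique<Window>(800, 600, "test");
        t = std::make_unique<TestShader>();
        // ... buffer setup as before ...
    }
    // update(), render(), terminate() as before, using w-> and t-> directly
};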

Program crash when calling OpenGL functions

I'm trying to set up a game engine project. My Visual Studio solution is set up so that I have an 'engine' project separate from my 'game' project. The engine project is compiled to a DLL for the game project to use. I've already downloaded and set up GLFW and GLEW to start using OpenGL. My problem is that whenever I hit my first OpenGL function the program crashes. I know this has something to do with glewInit, even though GLEW IS initializing successfully (no console errors). In my engine project, I have a window class where, upon window construction, GLEW should be set up:
Window.h
#pragma once
#include "GL\glew.h"
#include "GLFW\glfw3.h"
#if (_DEBUG)
#define LOG(x) printf(x)
#else
#define LOG(x)
#endif
namespace BlazeGraphics
{
class __declspec(dllexport) Window
{
public:
Window(short width, short height, const char* title);
~Window();
void Update();
void Clear() const;
bool Closed() const;
private:
int m_height;
int m_width;
const char* m_title;
GLFWwindow* m_window;
private:
Window(const Window& copy) {}
void operator=(const Window& copy) {}
};
}
Window.cpp (where glewInit() is called)
#include "Window.h"
#include <cstdio>
namespace BlazeGraphics
{
//Needed to define outside of the window class (not sure exactly why yet)
void WindowResize(GLFWwindow* window, int width, int height);
Window::Window(short width, short height, const char* title) :
m_width(width),
m_height(height),
m_title(title)
{
//InitializeWindow
{
if (!glfwInit())
{
LOG("Failed to initialize glfw!");
return;
};
m_window = glfwCreateWindow(m_width, m_height, m_title, NULL, NULL);
if (!m_window)
{
LOG("Failed to initialize glfw window!");
glfwTerminate();
return;
};
glfwMakeContextCurrent(m_window);
glfwSetWindowSizeCallback(m_window, WindowResize);
}
//IntializeGl
{
//This needs to be after two functions above (makecontextcurrent and setwindowresizecallback) or else glew will not initialize
if (glewInit() != GLEW_OK)
{
LOG("Failed to initialize glew!");
}
}
}
Window::~Window()
{
glfwTerminate();
}
void Window::Update()
{
glfwPollEvents();
glfwSwapBuffers(m_window);
}
void Window::Clear() const
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}
//Returns a bool because glfwwindowShouldClose returns a nonzero number or zero
bool Window::Closed() const
{
//Made it equal to 1 to take away warning involving converting an int to bool
return glfwWindowShouldClose(m_window) == 1;
}
//Not part of window class so defined above
void WindowResize(GLFWwindow* window, int width, int height)
{
glViewport(0, 0, width, height);
}
}
Here is my main.cpp file which is found within my game project where I currently have my openGL functionality in global functions (just for now):
main.cpp
#include <iostream>
#include <array>
#include <fstream>
#include "GL\glew.h"
#include "GLFW\glfw3.h"
#include "../Engine/Source/Graphics/Window.h"
void initializeGLBuffers()
{
GLfloat triangle[] =
{
+0.0f, +0.1f, -0.0f,
0.0f, 1.0f, 0.0f,
-0.1f, -0.1f, 0.0f, //1
0.0f, 1.0f, 0.0f,
+0.1f, -0.1f, 0.0f, //2
0.0f, 1.0f, 0.0f,
};
GLuint bufferID;
glGenBuffers(1, &bufferID);
glBindBuffer(GL_ARRAY_BUFFER, bufferID);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangle), triangle, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, (sizeof(GLfloat)) * 6, nullptr);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, (sizeof(GLfloat)) * 6, (char*)((sizeof(GLfloat)) * 3));
GLushort indices[] =
{
0,1,2
};
GLuint indexBufferID;
glGenBuffers(1, &indexBufferID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
};
void installShaders()
{
//Create Shader
GLuint vertexShaderID = glCreateShader(GL_VERTEX_SHADER);
GLuint FragmentShaderID = glCreateShader(GL_FRAGMENT_SHADER);
//Add source or text file to shader object
std::string temp = readShaderCode("VertexShaderCode.glsl");
const GLchar* adapter[1];
adapter[0] = temp.c_str();
glShaderSource(vertexShaderID, 1, adapter, 0);
temp = readShaderCode("FragmentShaderCode.glsl").c_str();
adapter[0] = temp.c_str();
glShaderSource(FragmentShaderID, 1, adapter, 0);
//Compile Shadaer
glCompileShader(vertexShaderID);
glCompileShader(FragmentShaderID);
if (!checkShaderStatus(vertexShaderID) || !checkShaderStatus(FragmentShaderID))
return;
//Create Program
GLuint programID = glCreateProgram();
glAttachShader(programID, vertexShaderID);
glAttachShader(programID, FragmentShaderID);
//Link Program
glLinkProgram(programID);
if (!checkProgramStatus(programID))
{
std::cout << "Failed to link program";
return;
}
//Use program
glUseProgram(programID);
}
int main()
{
BlazeGraphics::Window window(1280, 720, "MyGame");
initializeGLBuffers();
installShaders();
while (!window.Closed())
{
window.Clear();
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, 0);
window.Update();
};
return 0;
}
Now if I were to move the glewInit() code here in my main.cpp:
int main()
{
BlazeGraphics::Window window(1280, 720, "MyGame");
if (glewInit() != GLEW_OK)
{
LOG("Failed to initialize glew!");
}
initializeGLBuffers();
installShaders();
while (!window.Closed())
{
window.Clear();
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, 0);
window.Update();
};
return 0;
}
then my program runs fine. Why does trying to initialize GLEW within engine.dll cause a program crash? Thanks for any help.
GLEW works by defining a function pointer as global variable for each OpenGL function. Let's look at glBindBuffer as an example:
#define GLEW_FUN_EXPORT GLEWAPI
typedef void (GLAPIENTRY * PFNGLBINDBUFFERPROC) (GLenum target, GLuint buffer);
GLEW_FUN_EXPORT PFNGLBINDBUFFERPROC __glewBindBuffer;
So we just have a __glewBindBuffer function pointer, which will be set to the correct address from your OpenGL implementation by glewInit.
To actually be able to write glBindBuffer, GLEW simply defines pre-processor macros mapping the GL functions to those function pointer variables:
#define glBindBuffer GLEW_GET_FUN(__glewBindBuffer)
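So a GL call in your code is really an indirect call through that module-local pointer; roughly (an illustration, assuming the default configuration where GLEW_GET_FUN expands to its argument):
glBindBuffer(GL_ARRAY_BUFFER, bufferID);
// after preprocessing becomes a call through the pointer:
__glewBindBuffer(GL_ARRAY_BUFFER, bufferID);   // crashes if the pointer was never set in this module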
Why does trying to initialize glew within engine.dll cause a program crash?
Because your engine.dll and your main application each have a separate set of all of these global variables. You would have to export all the __glew* variables from your engine DLL to be able to get access to the results of your glewInit call in engine.dll.
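One workaround consistent with that explanation (a sketch, not the only option): export an initialization function from the engine DLL so the DLL's own copy of the GLEW pointers gets filled in, or alternatively link both modules against the GLEW shared library so there is only one set of __glew* variables. The function name InitGraphics below is hypothetical:
namespace BlazeGraphics
{
    // Compiled into engine.dll; initializes the __glew* pointers that live inside the DLL.
    __declspec(dllexport) bool InitGraphics()
    {
        glewExperimental = GL_TRUE;
        return glewInit() == GLEW_OK;
    }
}
// In the game's main(), after the Window (and thus the GL context) has been created:
// if (!BlazeGraphics::InitGraphics()) { /* handle failure */ }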

OpenGL Draw Points with indices

I was trying to draw a single point at 0,0 but my screen turns white and stops responding.
Can someone look into the code and see where I am making a mistake?
float *vertices;
GLubyte *pindices;
int width, height;
GLuint vboHandlePoints[1];
GLuint indexVBOPoints;
int numberOfPoints;
GLuint buf;
void initVboPoints(){
GLenum err = glewInit();
if (err != GLEW_OK)
{
fprintf(stderr, "Error: %s\n", glewGetErrorString(err));
}
glGenBuffers(1, &vboHandlePoints[0]); // create a VBO handle
glBindBuffer(GL_ARRAY_BUFFER, vboHandlePoints[0]); // bind the handle to the current VBO
glBufferData(GL_ARRAY_BUFFER, 1 * 2 * 4, vertices, GL_STATIC_DRAW); // allocate space and copy the data over
glBindBuffer(GL_ARRAY_BUFFER, 0); // clean up
glGenBuffers(1, &indexVBOPoints);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVBOPoints);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLubyte)*1 * 2, pindices, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); //clean up
}
void display(){
glClearColor(0, 1, 0, 1);
glClear(GL_COLOR_BUFFER_BIT);
glColor4f(0, 0, 0, 1);
glPointSize(5);
glEnableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, vboHandlePoints[0]);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVBOPoints);
glVertexPointer(2, GL_FLOAT, 0, 0);// 2-dimension
glDrawElements(GL_POINTS, 1, GL_UNSIGNED_BYTE, (char*)NULL + 0);
}
void initializeGlut(int argc, char** argv){
glutInit(&argc, argv);
width = 400;
height = 400;
glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE);
glutInitWindowSize(width, height);
glutCreateWindow("Bhavya's Program");
glutDisplayFunc(display);
}
void initNormal(){
vertices = new float[2];
pindices = new GLubyte[1];
vertices[0] = 0;
vertices[1] = 0;
pindices[0] = 0;
}
int main(int argc, char** argv){
initNormal();
initializeGlut(argc, argv);
initVboPoints();
glutMainLoop();
return 0;
}
You have no GL_ARRAY_BUFFER bound at the time of the glVertexPointer call. All of the gl*Pointer functions reference the currently bound GL_ARRAY_BUFFER; the buffer name becomes part of the pointer. That way, you can source each attribute from a different VBO. The GL_ARRAY_BUFFER binding is totally irrelevant for the glDraw* family of functions.
Since you use deprecated legacy GL, binding VBO 0 is still valid, and your pointer then references client memory. So your application is likely to just crash because you told the GL to read data at memory address 0...
The problem is that glFlush() is required: the window is created with GLUT_SINGLE (single-buffered), so nothing shows up until the pipeline is flushed at the end of display().
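A minimal sketch of display() with the flush added (assuming everything else stays as posted):
void display() {
    glClearColor(0, 1, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT);
    glColor4f(0, 0, 0, 1);
    glPointSize(5);
    glEnableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, vboHandlePoints[0]);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVBOPoints);
    glVertexPointer(2, GL_FLOAT, 0, (void*)0);            // offset 0 into the bound VBO
    glDrawElements(GL_POINTS, 1, GL_UNSIGNED_BYTE, (void*)0); // one point from the index buffer
    glDisableClientState(GL_VERTEX_ARRAY);
    glFlush();                                             // required with GLUT_SINGLE
}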

OpenGL Mesh Wrong Position

I am trying to create a simple triangle mesh, and after figuring out why I just got a blank screen (for some reason the x64 configuration was giving me problems) I am facing a new issue:
The positions of my vertices don't turn out to be how I want them. And I have no idea why.
What I should be getting is a triangle which looks about like this:
What I am getting though is this:
I use GLEW 1.10.0 for loading OpenGL, GLM 0.9.5.4 for OpenGL math stuff and SDL 2.0.3 for window stuff. Everything running on Windows 8.1 in Visual Studio 2013 Ultimate with the latest Nvidia graphics drivers.
Main.cpp:
#include "Display.h"
#include "Shader.h"
#include "Mesh.h"
using namespace std;
int main(int argc, char** argv)
{
Display display(1200, 800, "Hello World");
Vertex vertecies[] =
{
Vertex(vec3(-0.5, -0.5, 0)),
Vertex(vec3(0.5, -0.5, 0)),
Vertex(vec3(0, 0.5, 0))
};
Mesh mesh(vertecies, sizeof(vertecies) / sizeof(vertecies[0]));
Shader shader(".\\res\\BasicShader");
while (!display.IsClosed())
{
display.Clear(1.0f, 1.0f, 1.0f, 1.0f);
shader.Bind();
mesh.Draw();
display.Update();
}
return 0;
}
Mesh.cpp:
#include "Mesh.h"
Mesh::Mesh(Vertex* vertecies, unsigned int numVertecies)
{
m_drawCount = numVertecies;
glGenVertexArrays(1, &m_vertexArrayObject);
glBindVertexArray(m_vertexArrayObject);
glGenBuffers(NUM_BUFFERS, m_vertexArrayBuffers);
glBindBuffer(GL_ARRAY_BUFFER, m_vertexArrayBuffers[POSITION_VB]);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertecies[0]) * numVertecies, vertecies, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const void*)offsetof(Vertex, m_pos));
glBindVertexArray(0);
}
Mesh::~Mesh()
{
glDeleteBuffers(NUM_BUFFERS, m_vertexArrayBuffers);
glDeleteVertexArrays(1, &m_vertexArrayObject);
}
void Mesh::Draw()
{
glBindVertexArray(m_vertexArrayObject);
glDrawArrays(GL_TRIANGLES, 0, m_drawCount);
glBindVertexArray(0);
}
Display.cpp:
#include "Display.h"
Display::Display(int width, int height, string title)
{
SDL_Init(SDL_INIT_EVERYTHING);
SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BUFFER_SIZE, 32);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
m_window = SDL_CreateWindow(title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height, SDL_WINDOW_OPENGL);
m_glContext = SDL_GL_CreateContext(m_window);
m_isClosed = false;
GLenum status = glewInit();
if (status != GLEW_OK)
cerr << "Could not initialize GLEW!" << endl;
}
Display::~Display()
{
SDL_GL_DeleteContext(m_glContext);
SDL_DestroyWindow(m_window);
SDL_Quit();
}
void Display::Clear(float r, float g, float b, float a)
{
glClearColor(r, g, b, a);
glClear(GL_COLOR_BUFFER_BIT);
}
bool Display::IsClosed()
{
return m_isClosed;
}
void Display::Update()
{
SDL_GL_SwapWindow(m_window);
SDL_Event e;
while (SDL_PollEvent(&e))
{
if (e.type == SDL_QUIT)
m_isClosed = true;
}
}
Vertex Shader:
#version 420 core
layout(location = 0) in vec3 position;
void main()
{
gl_Position = vec4(position, 1.0);
}
Fragment Shader:
#version 420 core
out vec4 frag;
void main()
{
frag = vec4(1.0, 0.0, 0.0, 1.0);
}
Vertex.h:
#pragma once
#include <glm\glm.hpp>
using namespace glm;
struct Vertex
{
public:
Vertex(const vec3& pos);
virtual ~Vertex();
vec3 m_pos;
};
Vertex.cpp:
#include "Vertex.h"
Vertex::Vertex(const vec3& pos)
{
m_pos = pos;
}
Vertex::~Vertex()
{
}
EDIT: Everything has now been fixed.
This is probably a data alignment issue where your Vertex class is being padded. OpenGL would then interpret the padding bytes as valid data.
You can verify this by printing the result of sizeof(Vertex), which would be larger than 12 (the size of a glm::vec3) if it is indeed padded; on the 64-bit platform you mention, the virtual destructor alone adds an 8-byte vtable pointer in front of the data.
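A minimal sketch of that check, assuming the Vertex struct from the question (a vec3 member plus a virtual destructor):
// Sketch only: verify the layout assumption before handing the struct to GL.
#include <cstdio>
#include <glm/glm.hpp>
struct Vertex
{
    Vertex(const glm::vec3& pos) : m_pos(pos) {}
    virtual ~Vertex() {}        // the vptr this adds is what breaks tight packing
    glm::vec3 m_pos;
};
int main()
{
    // Typically prints 24 vs 12 on a 64-bit build.
    std::printf("sizeof(Vertex) = %zu, sizeof(glm::vec3) = %zu\n",
                sizeof(Vertex), sizeof(glm::vec3));
    return 0;
}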
This tells OpenGL that there are floats tightly packed in memory, without padding:
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
A better way of setting the vertex pointer would be:
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const void*)offsetof(Vertex, m_pos));
This also enables you to easily add more attributes.

Using a VBO to draw lines from a vector of points in OpenGL

I have a simple OpenGL program in which I am trying to use Vertex Buffer Objects for rendering instead of the old glBegin()/glEnd(). Basically the user clicks on the window to indicate a starting point, and then presses a key to generate subsequent points, which OpenGL draws as a line.
I've implemented this using glBegin() and glEnd() but have not been successful using a VBO. I am wondering if the problem is that after I initialize the VBO, I'm adding more vertices which it doesn't have memory allocated for, and thus doesn't display them.
Edit: Also, I'm a bit confused as to how it knows exactly which values in the vertex struct to use for x and y, as well as for r, g, b. I haven't been able to find a clear example of this.
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <Math.h>
#include <iostream>
#include <vector>
#include <GL/glew.h>
#include <GL/glut.h>
struct vertex {
float x, y, u, v, r, g, b;
};
const int D = 10; // distance
const int A = 10; // angle
const int WINDOW_WIDTH = 500, WINDOW_HEIGHT = 500;
std::vector<vertex> vertices;
boolean start = false;
GLuint vboId;
void update_line_point() {
vertex temp;
temp.x = vertices.back().x + D * vertices.back().u;
temp.y = vertices.back().y + D * vertices.back().v;
temp.u = vertices.back().u;
temp.v = vertices.back().v;
vertices.push_back(temp);
}
void update_line_angle() {
float u_prime, v_prime;
u_prime = vertices.back().u * cos(A) - vertices.back().v * sin(A);
v_prime = vertices.back().u * sin(A) + vertices.back().v * cos(A);
vertices.back().u = u_prime;
vertices.back().v = v_prime;
}
void initVertexBuffer() {
glGenBuffers(1, &vboId);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertex) * vertices.size(), &vertices[0], GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
void displayCB() {
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, WINDOW_WIDTH, 0, WINDOW_HEIGHT);
if (start) {
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(2, GL_FLOAT, sizeof(vertex), &vertices[0]);
glColorPointer(3, GL_FLOAT, sizeof(vertex), &vertices[0]);
glDrawArrays(GL_LINE_STRIP, 0, vertices.size());
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
/***** this is what I'm trying to achieve
glColor3f(1, 0, 0);
glBegin(GL_LINE_STRIP);
for (std::vector<vertex>::size_type i = 0; i < vertices.size(); i++) {
glVertex2f(vertices[i].x, vertices[i].y);
}
glEnd();
*****/
glFlush();
glutSwapBuffers();
}
void mouseCB(int button, int state, int x, int y) {
if (state == GLUT_DOWN) {
vertices.clear();
vertex temp = {x, WINDOW_HEIGHT - y, 1, 0, 1, 0, 0}; // default red color
vertices.push_back(temp);
start = true;
initVertexBuffer();
}
glutPostRedisplay();
}
void keyboardCB(unsigned char key, int x, int y) {
switch(key) {
case 'f':
if (start) {
update_line_point();
}
break;
case 't':
if (start) {
update_line_angle();
}
break;
}
glutPostRedisplay();
}
void initCallbackFunc() {
glutDisplayFunc(displayCB);
glutMouseFunc(mouseCB);
glutKeyboardFunc(keyboardCB);
}
int main(int argc, char** argv) {
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGB|GLUT_DOUBLE|GLUT_DEPTH);
glutInitWindowSize(WINDOW_WIDTH, WINDOW_HEIGHT);
glutInitWindowPosition(100, 100);
glutCreateWindow("Test");
initCallbackFunc();
// initialize glew
GLenum glewInitResult;
glewExperimental = GL_TRUE;
glewInitResult = glewInit();
if (GLEW_OK != glewInitResult) {
std::cerr << "Error initializing glew." << std::endl;
return 1;
}
glClearColor(1, 1, 1, 0);
glutMainLoop();
return 0;
}
If you have a VBO bound then the pointer argument to the gl*Pointer() calls is interpreted as a byte offset from the beginning of the VBO, not an actual pointer. Your usage is consistent with vertex array usage though.
So for your vertex struct x starts at byte zero and r starts at byte sizeof(float) * 4.
Also, your mouse callback resets your vertex vector on every call, so you would never be able to have more than one vertex in it at any given time. It also leaks VBO names via the glGenBuffers() call in initVertexBuffer().
Give this a shot:
#include <GL/glew.h>
#include <GL/glut.h>
#include <iostream>
#include <vector>
struct vertex
{
float x, y;
float u, v;
float r, g, b;
};
GLuint vboId;
std::vector<vertex> vertices;
void mouseCB(int button, int state, int x, int y)
{
y = glutGet( GLUT_WINDOW_HEIGHT ) - y;
if (state == GLUT_DOWN)
{
vertex temp = {x, y, 1, 0, 1, 0, 0}; // default red color
vertices.push_back(temp);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertex) * vertices.size(), &vertices[0], GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
glutPostRedisplay();
}
void displayCB()
{
glClearColor(1, 1, 1, 0);
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
double w = glutGet( GLUT_WINDOW_WIDTH );
double h = glutGet( GLUT_WINDOW_HEIGHT );
glOrtho( 0, w, 0, h, -1, 1 );
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
if ( vertices.size() > 1 )
{
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(2, GL_FLOAT, sizeof(vertex), (void*)(sizeof( float ) * 0));
glColorPointer(3, GL_FLOAT, sizeof(vertex), (void*)(sizeof( float ) * 4));
glDrawArrays(GL_LINE_STRIP, 0, vertices.size());
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
glutSwapBuffers();
}
int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGB|GLUT_DOUBLE|GLUT_DEPTH);
glutInitWindowSize(500, 500);
glutInitWindowPosition(100, 100);
glutCreateWindow("Test");
// initialize glew
glewExperimental = GL_TRUE;
GLenum glewInitResult = glewInit();
if (GLEW_OK != glewInitResult) {
std::cerr << "Error initializing glew." << std::endl;
return 1;
}
glGenBuffers(1, &vboId);
glutDisplayFunc(displayCB);
glutMouseFunc(mouseCB);
glutMainLoop();
return 0;
}
A VBO is a buffer located somewhere in memory (almost always in dedicated GPU memory - VRAM) of a fixed size. You specify this size in glBufferData, and you also simultaneously give the GL a pointer to copy from. The key word here is copy. Everything you do to the vector after glBufferData isn't reflected in the VBO.
You should be binding and doing another glBufferData call after changing the vector. You will also probably get better performance from glBufferSubData or glMapBuffer if the VBO is already large enough to handle the new data, but in a small application like this the performance hit of calling glBufferData every time is basically non-existent.
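A minimal sketch of that re-upload, assuming the same vboId and vertices vector as in the question (re-specifying the whole buffer with glBufferData; glBufferSubData would only be valid if the buffer is already large enough):
// Call after every change to 'vertices' so the VBO matches the CPU-side data again.
void uploadVertices()
{
    glBindBuffer(GL_ARRAY_BUFFER, vboId);
    glBufferData(GL_ARRAY_BUFFER,
                 sizeof(vertex) * vertices.size(),
                 vertices.data(),
                 GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}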
Also, to address your other question about how the values for x, y, etc. are picked out: the way your VBO is set up, the values are interleaved, so in memory your vertices will look like this:
+-------------------------------------------------
| x | y | u | v | r | g | b | x | y | u | v | ...
+-------------------------------------------------
You tell OpenGL where your vertices and colors are with the glVertexPointer and glColorPointer functions respectively.
The size parameter specifies how many elements there are for each vertex. In this case, it's 2 for vertices, and 3 for colors.
The type parameter specifies what type each element is. In your case it's GL_FLOAT for both.
The stride parameter is how many bytes you need to skip from the start of one vertex to the start of the next. With an interleaved setup like yours, this is simply sizeof(vertex) for both.
The last parameter, pointer, isn't actually a pointer to your vector in this case. When a VBO is bound, pointer becomes a byte offset into the VBO. For vertices, this should be 0, since the first vertex starts at the very first byte of the VBO. For colors, this should be 4 * sizeof(float), since the first color is preceded by 4 floats.
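A small sketch that expresses the same offsets with offsetof, so they track the struct layout instead of hand-counted floats (assumes the vertex struct from the question and the VBO bound at the time of the calls):
#include <cstddef>   // offsetof
glVertexPointer(2, GL_FLOAT, sizeof(vertex), (void*)offsetof(vertex, x));
glColorPointer (3, GL_FLOAT, sizeof(vertex), (void*)offsetof(vertex, r));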