OpenGL Draw Points with indices - opengl

I was trying to draw a single point at (0, 0), but my screen turns white and the window stops responding.
Can someone look at the code and see where I am making a mistake?
float *vertices;
GLubyte *pindices;
int width, height;
GLuint vboHandlePoints[1];
GLuint indexVBOPoints;
int numberOfPoints;
GLuint buf;

void initVboPoints() {
    GLenum err = glewInit();
    if (err != GLEW_OK) {
        fprintf(stderr, "Error: %s\n", glewGetErrorString(err));
    }
    glGenBuffers(1, &vboHandlePoints[0]);              // create a VBO handle
    glBindBuffer(GL_ARRAY_BUFFER, vboHandlePoints[0]); // bind the handle to the current VBO
    glBufferData(GL_ARRAY_BUFFER, 1 * 2 * 4, vertices, GL_STATIC_DRAW); // allocate space and copy the data over
    glBindBuffer(GL_ARRAY_BUFFER, 0);                  // clean up

    glGenBuffers(1, &indexVBOPoints);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVBOPoints);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLubyte) * 1 * 2, pindices, GL_STATIC_DRAW);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);          // clean up
}

void display() {
    glClearColor(0, 1, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT);
    glColor4f(0, 0, 0, 1);
    glPointSize(5);

    glEnableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, vboHandlePoints[0]);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVBOPoints);
    glVertexPointer(2, GL_FLOAT, 0, 0); // 2-dimension
    glDrawElements(GL_POINTS, 1, GL_UNSIGNED_BYTE, (char*)NULL + 0);
}

void initializeGlut(int argc, char** argv) {
    glutInit(&argc, argv);
    width = 400;
    height = 400;
    glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE);
    glutInitWindowSize(width, height);
    glutCreateWindow("Bhavya's Program");
    glutDisplayFunc(display);
}

void initNormal() {
    vertices = new float[2];
    pindices = new GLubyte[1];
    vertices[0] = 0;
    vertices[1] = 0;
    pindices[0] = 0;
}

int main(int argc, char** argv) {
    initNormal();
    initializeGlut(argc, argv);
    initVboPoints();
    glutMainLoop();
    return 0;
}

You have no GL_ARRAY_BUFFER bound at the time of the glVertexPointer call. All of the gl*Pointer functions reference the currently bound GL_ARRAY_BUFFER; the buffer name becomes part of the pointer state. That way, you can source each attribute from a different VBO. The GL_ARRAY_BUFFER binding is completely irrelevant for the glDraw* family of functions.
Since you use deprecated legacy GL, binding VBO 0 is still valid, and your pointer then references client memory. So your application is likely to just crash, because you told the GL to read the data at memory address 0...
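For reference, a minimal sketch of the intended pattern, using the names from the question (the key point being that the VBO must be bound at the moment the pointer is specified):
glBindBuffer(GL_ARRAY_BUFFER, vboHandlePoints[0]);     // bound *before* the pointer call
glVertexPointer(2, GL_FLOAT, 0, (void*)0);             // offset 0 into the bound VBO
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVBOPoints); // this binding is what glDrawElements reads
glDrawElements(GL_POINTS, 1, GL_UNSIGNED_BYTE, (void*)0);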

The problem is that glFlush() is required: the window was created with GLUT_SINGLE, so there is no buffer swap that would implicitly finish the frame, and the display callback must flush the command queue itself.
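A sketch of that fix in the question's display() function (drawing code trimmed):
void display() {
    // ... clear, set state, bind buffers, and draw as in the question ...
    glDrawElements(GL_POINTS, 1, GL_UNSIGNED_BYTE, (void*)0);
    glFlush(); // single-buffered window: explicitly flush the pipeline
}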

Related

gl_deletebuffers not working in a separate thread

In my rendering application, if I run the render function in the main loop then everything works fine, but if I move rendering to another thread then the destructors of the objects are not able to release the buffers when an object gets destroyed. The destructors for the objects are called, but it seems as if glDeleteBuffers is not able to release the buffers.
How I came to this conclusion:
1) When I run everything in the main loop and create an object, the VAO number for the object is 1.
2) After destroying the object, the next object's VAO is also assigned number 1.
But when rendering runs on a separate thread:
1) The VAO number keeps incrementing with every object.
2) System RAM usage also keeps increasing, and the memory is only released when I close the application.
3) The destructor for an object is definitely called when I delete it, but it seems the destructor has not been able to release the buffer.
//#define GLEW_STATIC
#include <gl\glew.h>
#include <glfw3.h>
#include "TreeModel.h"
#include "ui_WavefrontRenderer.h"
#include <QtWidgets/QApplication>
#include <QMessageBox>
#include <thread>

#define FPS_WANTED 60
const double limitFPS = 1.0 / 50.0;

Container cont;
const unsigned int SCR_WIDTH = 800;
const unsigned int SCR_HEIGHT = 600;

void framebuffer_size_callback(GLFWwindow* window, int width, int height);
void processInput(GLFWwindow *window);

GLFWwindow* window = nullptr;

void RenderThread(WavefrontRenderer* w)
{
    glfwMakeContextCurrent(window);
    GLenum GlewInitResult;
    glewExperimental = GL_TRUE;
    GlewInitResult = glewInit();
    if (GLEW_OK != GlewInitResult) // Check if glew is initialized properly
    {
        QMessageBox msgBox;
        msgBox.setText("Not able to Initialize Glew");
        msgBox.exec();
        glfwTerminate();
    }
    if (window == NULL)
    {
        QMessageBox msgBox;
        msgBox.setText("Not able to create GL Window");
        msgBox.exec();
        glfwTerminate();
        //return -1;
    }

    w->InitData();
    glEnable(GL_MULTISAMPLE);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glBlendEquation(GL_FUNC_ADD);

    while (!glfwWindowShouldClose(window))
    {
        // input
        // -----
        processInput(window);
        // - Measure time
        glClearColor(0.3, 0.3, 0.3, 0.0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
        w->render(); // DO the Rendering
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwTerminate();
    std::terminate();
}

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    cont.SetName("RootItem");
    TreeModel* model = new TreeModel("RootElement", &cont);
    WavefrontRenderer w(model);
    w.show();

    glfwInit();
    glfwWindowHint(GLFW_RESIZABLE, GL_TRUE);
    glfwWindowHint(GLFW_SAMPLES, 4);
    window = glfwCreateWindow(SCR_WIDTH, SCR_HEIGHT, "Renderer", nullptr, nullptr); // Create the render window
    glfwMakeContextCurrent(0);

    std::thread renderThread(RenderThread, &w);
    renderThread.detach();
    return a.exec();
    return 0; // unreachable
}
Class definition for an object:
The render function w->render() calls the draw() function of an object.
The base class has a virtual destructor.
#include "Triangle.h"
#include "qdebug.h"
#include "qmessagebox.h"
float verticesTriangle[] = {
-50.0f, -50.0f, 0.0f, 0.0f , 0.0f,1.0f ,0.0f, 0.0f,
50.0f, -50.0f, 0.0f, 0.0f , 0.0f,1.0f ,1.0f, 0.0f,
0.0f, 50.0f, 0.0f, 0.0f, 0.0f,1.0f ,0.5f, 1.0f
};
Triangle::Triangle() : Geometry("TRIANGLE", true)
{
this->isInited = 0;
this->m_VBO = 0;
this->m_VAO = 0;
this->iNumsToDraw = 0;
this->isChanged = true;
}
Triangle::Triangle(const Triangle& triangle) : Geometry( triangle )
{
CleanUp();
this->isInited = 0;
this->m_VBO = 0;
this->m_VAO = 0;
this->iNumsToDraw = triangle.iNumsToDraw;
this->isChanged = true;
this->shader = ResourceManager::GetShader("BasicShader");
iEntries = 3;
}
Triangle& Triangle::operator=(const Triangle& triangle)
{
CleanUp();
Geometry::operator=(triangle);
this->isInited = 0;
this->m_VBO = 0;
this->m_VAO = 0;
this->iNumsToDraw = triangle.iNumsToDraw;
this->isChanged = true;
this->shader = ResourceManager::GetShader("BasicShader");
return (*this);
}
void Triangle::init()
{
glGenVertexArrays(1, &m_VAO);
glGenBuffers(1, &m_VBO);
glBindVertexArray(m_VAO);
glBindBuffer(GL_ARRAY_BUFFER, m_VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(verticesTriangle), verticesTriangle, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(6 * sizeof(float)));
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
isInited = true;
}
void Triangle::CleanUp()
{
if (!this->isInited)
{
return;
}
if (this->m_VAO)
glDeleteVertexArrays(1, &this->m_VAO);
if (this->m_VBO)
glDeleteBuffers(1, &this->m_VBO);
this->isInited = false;
}
void Triangle::draw()
{
if (isChanged)
{
init();
isChanged = false;
}
this->shader.Use();
glBindVertexArray(m_VAO);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);
}
Triangle::~Triangle()
{
if (this->m_VAO)
glDeleteVertexArrays(1, &this->m_VAO);
if (this->m_VBO)
glDeleteBuffers(1, &this->m_VBO);
this->isInited = false;
}
OpenGL contexts are thread-local state:
Every thread has exactly one or no OpenGL context active in it at any given time.
Each OpenGL context must be active in no thread or in exactly one thread at any given time.
OpenGL contexts are not automatically migrated between threads.
I.e. if you don't explicitly make the OpenGL context in question non-current on the thread where it's currently active, and subsequently make it current on the thread from which you call glDeleteBuffers, the call will have no effect on the context you expected it to affect.
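A minimal sketch of that handover with GLFW, using the window and member names from the question; the cross-thread signaling is omitted and would need real synchronization:
// Render thread: detach the context from this thread first.
glfwMakeContextCurrent(nullptr);
// ... signal the other thread that the context is free (synchronization omitted) ...

// Deleting thread: make the context current here; now the deletes affect it.
glfwMakeContextCurrent(window);
glDeleteVertexArrays(1, &m_VAO);
glDeleteBuffers(1, &m_VBO);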

OpenGL ES2 black screen (C++, OSX, SDL2)

Hate to make a "find my bug" post, but the internet seems to be severely lacking any examples of ultra-straightforward OpenGL ES2 triangle drawing.
And technically, this is using OpenGL 2.1, but OpenGL ES2 is just a subset of that... right?
Anyways, here's my code (I stripped logging/error checking where I know it passes):
#include <SDL.h>
#include <SDL_opengl.h>

int main(int argc, char* argv[])
{
    SDL_Init(SDL_INIT_VIDEO | SDL_INIT_EVENTS);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_COMPATIBILITY);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 1);
    SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    SDL_Window* window = SDL_CreateWindow("Test", 0, 0, 100, 100, SDL_WINDOW_OPENGL);
    SDL_GLContext gl = SDL_GL_CreateContext(window);

    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);

    GLuint gl_program_id;
    GLuint gl_vs_id;
    GLuint gl_fs_id;
    GLuint gl_pos_attrib_id;
    GLuint gl_pos_buff_id;

    char vs_file[2048]; // assume I successfully loaded vertex shader code into this array
    char *vs_file_p = &vs_file[0];
    char fs_file[2048]; // assume I successfully loaded fragment shader code into this array
    char *fs_file_p = &fs_file[0];

    gl_vs_id = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(gl_vs_id, 1, &vs_file_p, NULL);
    glCompileShader(gl_vs_id);
    gl_fs_id = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(gl_fs_id, 1, &fs_file_p, NULL);
    glCompileShader(gl_fs_id);
    gl_program_id = glCreateProgram();
    glAttachShader(gl_program_id, gl_vs_id);
    glAttachShader(gl_program_id, gl_fs_id);
    glLinkProgram(gl_program_id);
    glDeleteShader(gl_vs_id);
    glDeleteShader(gl_fs_id);
    glUseProgram(gl_program_id);

    gl_pos_attrib_id = glGetAttribLocation(gl_program_id, "position");
    float verts[] = {0.0, 0.0, 0.5, 1.0, 1.0, 0.0};
    glGenBuffers(1, &gl_pos_buff_id);
    glBindBuffer(GL_ARRAY_BUFFER, gl_pos_buff_id);
    glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 2 * 3, verts, GL_STATIC_DRAW);
    glVertexAttribPointer(gl_pos_attrib_id, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glEnableVertexAttribArray(gl_pos_attrib_id);

    glViewport(0, 0, 100, 100);
    glClearColor(0., 0., 0., 1.);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(gl_program_id);
    glBindBuffer(GL_ARRAY_BUFFER, gl_pos_buff_id);
    glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 2 * 3, verts, GL_STATIC_DRAW);
    glVertexAttribPointer(gl_pos_attrib_id, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glEnableVertexAttribArray(gl_pos_attrib_id);
    glDrawArrays(GL_TRIANGLES, 0, 3 * 3);
    SDL_GL_SwapWindow(window);

    // while loop waiting on input to quit goes here...

    SDL_GL_DeleteContext(gl);
    SDL_Quit();
    exit(0);
    return 0;
}
And my shaders are as follows:
Vertex:
#version 110
attribute vec2 position;
void main()
{
    gl_Position = vec4(position, 1., 1.);
}
Fragment:
#version 110
void main()
{
    gl_FragColor = vec4(1.);
}
Just another note: all the shaders compile, the program links, and the context and window creation all work without a hitch. It just ends up with a black 100x100 px window. Any help would be greatly appreciated!
(Also, if anyone can recommend decent, consistent, straightforward documentation for OpenGL ES2, that would be awesome. There's so much inconsistency between docs for different versions/platforms. Thanks!)

texCoordPointer not working in lwjgl

I've been having problems storing texture coordinates in a VBO and then telling OpenGL to use them when it's time to render. With the code below, what I should be getting is a nice 16x16 texture on a square I am making using quads. However, what I get instead is the top-left pixel of the image, which is red, stretched over the whole thing, so I get a big red square. Please tell me what I am doing wrong, in great detail.
public void start() {
    try {
        Display.setDisplayMode(new DisplayMode(800, 600));
        Display.create();
    } catch (LWJGLException e) {
        e.printStackTrace();
        System.exit(0);
    }

    // init OpenGL
    GL11.glMatrixMode(GL11.GL_PROJECTION);
    GL11.glLoadIdentity();
    GL11.glOrtho(0, 800, 0, 600, 1, -1);
    GL11.glMatrixMode(GL11.GL_MODELVIEW);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);
    glLoadIdentity();

    //loadTextures();
    TextureManager.init();
    makeCube();

    // init OpenGL here
    while (!Display.isCloseRequested()) {
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
        // render OpenGL here
        renderCube();
        Display.update();
    }
    Display.destroy();
}

public static void main(String[] argv) {
    Screen screen = new Screen();
    screen.start();
}

int cube;
int texture;

private void makeCube() {
    FloatBuffer cubeBuffer;
    FloatBuffer textureBuffer;

    // Tried using 0,0,16,0,16,16,0,16 for textureData; did not work.
    float[] textureData = new float[]{
        0, 0,
        1, 0,
        1, 1,
        0, 1};
    textureBuffer = BufferUtils.createFloatBuffer(textureData.length);
    textureBuffer.put(texture);
    textureBuffer.flip();
    texture = glGenBuffers();
    glBindBuffer(GL_ARRAY_BUFFER, texture);
    glBufferData(GL_ARRAY_BUFFER, textureBuffer, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    float[] cubeData = new float[]{
        /* Front face */
        100, 100,
        100 + 200, 100,
        100 + 200, 100 + 200,
        100, 100 + 200};
    cubeBuffer = BufferUtils.createFloatBuffer(cubeData.length);
    cubeBuffer.put(cubeData);
    cubeBuffer.flip();
    cube = glGenBuffers();
    glBindBuffer(GL_ARRAY_BUFFER, cube);
    glBufferData(GL_ARRAY_BUFFER, cubeBuffer, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}

private void renderCube() {
    TextureManager.texture.bind();
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);

    glBindBuffer(GL_ARRAY_BUFFER, texture);
    glTexCoordPointer(2, GL_FLOAT, 0, 0);
    glBindBuffer(GL_ARRAY_BUFFER, cube);
    glVertexPointer(2, GL_FLOAT, 0, 0);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glDrawArrays(GL_QUADS, 0, 4);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
I believe your problem is in the argument to textureBuffer.put() in this code fragment:
textureBuffer = BufferUtils.createFloatBuffer(textureData.length);
textureBuffer.put(texture);
textureBuffer.flip();
texture is a variable of type int, which has not even been initialized yet. You later use it as a buffer name. The argument should be textureData instead:
textureBuffer.put(textureData);
I normally try to focus on functionality over style when answering questions here, but I can't help it this time: IMHO, texture is a very unfortunate name for a buffer name. It's not only a style and readability question. If you used descriptive names for the variables, you most likely would have spotted this problem immediately.
Say you named the variable for the buffer name bufferId (I call object identifiers "id", even though the official OpenGL terminology is "name"), and the buffer holding the texture coordinates textureCoordBuf. The statement in question would then become:
textureCoordBuf.put(bufferId);
which would jump out as highly suspicious from even a very superficial look at the code.

glDrawElements throw GL_INVALID_VALUE error

I am trying to draw part of my tile image, but I am getting a GL_INVALID_VALUE error when I call glDrawElements. There is no problem when I replace that call with glDrawArrays. The count parameter I pass is certainly not a negative number, so that cannot be the cause.
Here is the code:
#define BUFFER_OFFSET(i) ((char *)nullptr + (i))
#define VERTEX_ATTR_PTR(loc, count, member, type) \
    glEnableVertexAttribArray(loc); \
    glVertexAttribPointer(loc, count, GL_FLOAT, GL_FALSE, sizeof(type), BUFFER_OFFSET(offsetof(struct type, member)))

// ---------- TextRenderer
void TextRenderer::setText(const string& text) {
    vector<Vertex2f> vertex_buffer;
    vector<GLuint> index_buffer;
    GLfloat cursor = 0.f;
    FPoint2D cell_size = font->getCellSize();
    for (char c : text) {
        TILE_ITER iter = font->getCharacter(c);
        {
            // UV
            for (GLuint i = 0; i < 4; ++i) {
                TILE_ITER _v = iter + i;
                vertex_buffer.push_back({
                    {
                        _v->pos[0] + cursor,
                        _v->pos[1],
                        _v->pos[2]
                    },
                    { _v->uv[0], _v->uv[1] }
                });
            }
            // Index
            for (GLuint i = 0; i < 6; ++i)
                index_buffer.push_back(
                    Tile::indices[i] + vertex_buffer.size() - 4);
        }
        cursor += cell_size.X;
    }
    vertices_count = vertex_buffer.size();
    indices_count = index_buffer.size();

    glBindVertexArray(vao);
    {
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indices);
        glBufferSubData(GL_ELEMENT_ARRAY_BUFFER,
                        0,
                        indices_count * sizeof(GLuint),
                        &index_buffer[0]);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferSubData(GL_ARRAY_BUFFER,
                        0,
                        vertices_count * sizeof(Vertex2f),
                        &vertex_buffer[0]);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }
    glBindVertexArray(0);
}

void TextRenderer::create() {
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    {
        indices = genGLBuffer({
            nullptr,
            BUFFER_SIZE / 2 * sizeof(GLuint),
            GL_ELEMENT_ARRAY_BUFFER
        }, true, GL_DYNAMIC_DRAW);
        vbo = genGLBuffer({
            nullptr,
            BUFFER_SIZE * sizeof(Vertex2f),
            GL_ARRAY_BUFFER
        }, true, GL_DYNAMIC_DRAW);
        VERTEX_ATTR_PTR(0, 3, pos, Vertex2f); // Vertex
        VERTEX_ATTR_PTR(1, 2, uv, Vertex2f);  // UV
    }
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    glBindVertexArray(0);

    setText("DUPA");
}

void TextRenderer::draw(MatrixStack& matrix, GLint) {
    static Shader shader(
        getFileContents("shaders/text_frag.glsl"),
        getFileContents("shaders/text_vert.glsl"),
        "");
    shader.begin();
    shader.setUniform(GL_TEXTURE_2D, "texture", 0,
                      font->getHandle());
    shader.setUniform("matrix.mvp", matrix.vp_matrix * matrix.model);
    shader.setUniform("col", col);
    {
        glBindVertexArray(vao);
        //glDrawArrays(GL_LINE_STRIP, 0, vertices_count);
        glDrawElements(GL_LINES, indices_count, GL_UNSIGNED_INT,
                       nullptr);
        glBindVertexArray(0);
        showGLErrors();
    }
    shader.end();
}
The problem is with the following (shortened) call sequence in your setText() method:
glBindVertexArray(vao);
{
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indices);
    glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, ...);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    ...
}
glBindVertexArray(0);
The binding of the GL_ELEMENT_ARRAY_BUFFER is part of the VAO state. So by making this call while the VAO is bound:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
you're setting the VAO state to have an element array buffer binding of 0. So when you later bind the VAO in your draw() method, you won't have a binding for GL_ELEMENT_ARRAY_BUFFER.
To avoid this, the simplest solution is to just remove that call. If you want to explicitly unbind it because you're worried that having it bound might have undesired side effects on other code, you need to move it after unbinding the VAO:
glBindVertexArray(vao);
{
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indices);
    glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, ...);
    ...
}
glBindVertexArray(0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
Without seeing the whole code or knowing the exact GL version, I will attempt to give you a correct answer.
First, if you're using ES2, then GL_UNSIGNED_INT as the index type is not supported by default; however, I don't think that's your problem.
Your actual issue is that element arrays are not stored in your VAO object, only the vertex data configuration. Therefore glDrawElements gives you this error because it thinks no element array is bound and you passed NULL as the indices argument to the function.
To solve this, bind the appropriate element array buffer before you call glDrawElements, as sketched below.
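A minimal sketch of what this answer proposes, using the names from the question's draw() method:
glBindVertexArray(vao);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indices); // rebind the index buffer explicitly before drawing
glDrawElements(GL_LINES, indices_count, GL_UNSIGNED_INT, nullptr);
glBindVertexArray(0);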

Using a VBO to draw lines from a vector of points in OpenGL

I have a simple OpenGL program in which I am trying to use Vertex Buffer Objects for rendering instead of the old glBegin()/glEnd(). Basically, the user clicks on the window to indicate a starting point, and then presses a key to generate subsequent points, which OpenGL draws as a line.
I've implemented this using glBegin() and glEnd(), but have not been successful using a VBO. I am wondering if the problem is that after I initialize the VBO, I'm adding more vertices that it doesn't have memory allocated for, and thus doesn't display them.
Edit: Also, I'm a bit confused as to how it knows exactly which values in the vertex struct to use for x and y, as well as for r, g, b. I haven't been able to find a clear example of this.
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <Math.h>
#include <iostream>
#include <vector>
#include <GL/glew.h>
#include <GL/glut.h>

struct vertex {
    float x, y, u, v, r, g, b;
};

const int D = 10; // distance
const int A = 10; // angle
const int WINDOW_WIDTH = 500, WINDOW_HEIGHT = 500;

std::vector<vertex> vertices;
boolean start = false;
GLuint vboId;

void update_line_point() {
    vertex temp;
    temp.x = vertices.back().x + D * vertices.back().u;
    temp.y = vertices.back().y + D * vertices.back().v;
    temp.u = vertices.back().u;
    temp.v = vertices.back().v;
    vertices.push_back(temp);
}

void update_line_angle() {
    float u_prime, v_prime;
    u_prime = vertices.back().u * cos(A) - vertices.back().v * sin(A);
    v_prime = vertices.back().u * sin(A) + vertices.back().v * cos(A);
    vertices.back().u = u_prime;
    vertices.back().v = v_prime;
}

void initVertexBuffer() {
    glGenBuffers(1, &vboId);
    glBindBuffer(GL_ARRAY_BUFFER, vboId);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertex) * vertices.size(), &vertices[0], GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}

void displayCB() {
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0, WINDOW_WIDTH, 0, WINDOW_HEIGHT);

    if (start) {
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_COLOR_ARRAY);
        glVertexPointer(2, GL_FLOAT, sizeof(vertex), &vertices[0]);
        glColorPointer(3, GL_FLOAT, sizeof(vertex), &vertices[0]);
        glDrawArrays(GL_LINE_STRIP, 0, vertices.size());
        glDisableClientState(GL_VERTEX_ARRAY);
        glDisableClientState(GL_COLOR_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }

    /***** this is what I'm trying to achieve
    glColor3f(1, 0, 0);
    glBegin(GL_LINE_STRIP);
    for (std::vector<vertex>::size_type i = 0; i < vertices.size(); i++) {
        glVertex2f(vertices[i].x, vertices[i].y);
    }
    glEnd();
    *****/

    glFlush();
    glutSwapBuffers();
}

void mouseCB(int button, int state, int x, int y) {
    if (state == GLUT_DOWN) {
        vertices.clear();
        vertex temp = {x, WINDOW_HEIGHT - y, 1, 0, 1, 0, 0}; // default red color
        vertices.push_back(temp);
        start = true;
        initVertexBuffer();
    }
    glutPostRedisplay();
}

void keyboardCB(unsigned char key, int x, int y) {
    switch (key) {
    case 'f':
        if (start) {
            update_line_point();
        }
        break;
    case 't':
        if (start) {
            update_line_angle();
        }
        break;
    }
    glutPostRedisplay();
}

void initCallbackFunc() {
    glutDisplayFunc(displayCB);
    glutMouseFunc(mouseCB);
    glutKeyboardFunc(keyboardCB);
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
    glutInitWindowSize(WINDOW_WIDTH, WINDOW_HEIGHT);
    glutInitWindowPosition(100, 100);
    glutCreateWindow("Test");
    initCallbackFunc();

    // initialize glew
    GLenum glewInitResult;
    glewExperimental = GL_TRUE;
    glewInitResult = glewInit();
    if (GLEW_OK != glewInitResult) {
        std::cerr << "Error initializing glew." << std::endl;
        return 1;
    }

    glClearColor(1, 1, 1, 0);
    glutMainLoop();
    return 0;
}
If you have a VBO bound, then the pointer argument to the gl*Pointer() calls is interpreted as a byte offset from the beginning of the VBO, not an actual pointer. Your usage is consistent with client-side vertex array usage, though.
So for your vertex struct, x starts at byte zero and r starts at byte sizeof(float) * 4.
Also, your mouse callback resets your vertex vector on every call, so you would never be able to have more than one vertex in it at any given time. It also leaks VBO names via the glGenBuffers() call in initVertexBuffer().
Give this a shot:
#include <GL/glew.h>
#include <GL/glut.h>
#include <iostream>
#include <vector>

struct vertex
{
    float x, y;
    float u, v;
    float r, g, b;
};

GLuint vboId;
std::vector<vertex> vertices;

void mouseCB(int button, int state, int x, int y)
{
    y = glutGet(GLUT_WINDOW_HEIGHT) - y;
    if (state == GLUT_DOWN)
    {
        vertex temp = {x, y, 1, 0, 1, 0, 0}; // default red color
        vertices.push_back(temp);
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glBufferData(GL_ARRAY_BUFFER, sizeof(vertex) * vertices.size(), &vertices[0], GL_STATIC_DRAW);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }
    glutPostRedisplay();
}

void displayCB()
{
    glClearColor(1, 1, 1, 0);
    glClear(GL_COLOR_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    double w = glutGet(GLUT_WINDOW_WIDTH);
    double h = glutGet(GLUT_WINDOW_HEIGHT);
    glOrtho(0, w, 0, h, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    if (vertices.size() > 1)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_COLOR_ARRAY);
        glVertexPointer(2, GL_FLOAT, sizeof(vertex), (void*)(sizeof(float) * 0));
        glColorPointer(3, GL_FLOAT, sizeof(vertex), (void*)(sizeof(float) * 4));
        glDrawArrays(GL_LINE_STRIP, 0, vertices.size());
        glDisableClientState(GL_VERTEX_ARRAY);
        glDisableClientState(GL_COLOR_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }

    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
    glutInitWindowSize(500, 500);
    glutInitWindowPosition(100, 100);
    glutCreateWindow("Test");

    // initialize glew
    glewExperimental = GL_TRUE;
    GLenum glewInitResult = glewInit();
    if (GLEW_OK != glewInitResult) {
        std::cerr << "Error initializing glew." << std::endl;
        return 1;
    }

    glGenBuffers(1, &vboId);

    glutDisplayFunc(displayCB);
    glutMouseFunc(mouseCB);
    glutMainLoop();
    return 0;
}
A VBO is a buffer located somewhere in memory (almost always in dedicated GPU memory - VRAM) of a fixed size. You specify this size in glBufferData, and you also simultaneously give the GL a pointer to copy from. The key word here is copy. Everything you do to the vector after glBufferData isn't reflected in the VBO.
You should be binding and doing another glBufferData call after changing the vector. You will also probably get better performance from glBufferSubData or glMapBuffer if the VBO is already large enough to handle the new data, but in a small application like this the performance hit of calling glBufferData every time is basically non-existent.
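As a minimal sketch of the glBufferSubData route, assuming the VBO was allocated up front with room for the maximum number of vertices (that up-front allocation is an assumption, not something the question's code does):
// Re-upload after the vector changes; the buffer must already be large enough.
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertex) * vertices.size(), &vertices[0]);
glBindBuffer(GL_ARRAY_BUFFER, 0);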
Also, to address your other question about how the values for x, y, etc. are picked out: the way your VBO is set up, the values are interleaved. So in memory, your vertices will look like this:
+-------------------------------------------------
| x | y | u | v | r | g | b | x | y | u | v | ...
+-------------------------------------------------
You tell OpenGL where your vertices and colors are with the glVertexPointer and glColorPointer functions respectively.
The size parameter specifies how many elements there are for each vertex. In this case, it's 2 for vertices, and 3 for colors.
The type parameter specifies what type each element is. In your case it's GL_FLOAT for both.
The stride parameter is how many bytes you need to skip from the start of one vertex to the start of the next. With an interleaved setup like yours, this is simply sizeof(vertex) for both.
The last parameter, pointer, isn't actually a pointer to your vector in this case. When a VBO is bound, pointer becomes a byte offset into the VBO. For vertices, this should be 0, since the first vertex starts at the very first byte of the VBO. For colors, this should be 4 * sizeof(float), since the first color is preceded by 4 floats.
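For reference, the same two calls can also be written with offsetof, which keeps the byte offsets tied to the struct layout instead of hand-counted float sizes; a sketch using the vertex struct from the answer above:
#include <cstddef> // offsetof
glVertexPointer(2, GL_FLOAT, sizeof(vertex), (void*)offsetof(vertex, x)); // offset 0
glColorPointer(3, GL_FLOAT, sizeof(vertex), (void*)offsetof(vertex, r));  // offset of the first color float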