Why does the destructor get called before the copy is done? - c++

I have the following code
class Mesh
{
public:
Mesh();
Mesh(std::vector<Vertex> vertices, std::vector<GLuint> indices);
~Mesh();
void draw(Shader& shader);
private:
std::vector<Vertex> mVertices;
std::vector<GLuint> mIndices;
GLuint mVBO;
GLuint mEBO;
};
Mesh::Mesh(std::vector<Vertex> vertices, std::vector<GLuint> indices)
{
mIndices = indices;
glGenBuffers(1, &mEBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mEBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, mIndices.size() * sizeof(GLuint), &mIndices[0], GL_STATIC_DRAW);
mVertices = vertices;
glGenBuffers(1, &mVBO);
glBindBuffer(GL_ARRAY_BUFFER, mVBO);
glBufferData(GL_ARRAY_BUFFER, mVertices.size() * sizeof(Vertex), &mVertices[0], GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
computeBoundingBox();
}
Mesh::~Mesh()
{
glDeleteBuffers(1, &mVBO);
glDeleteBuffers(1, &mEBO);
}
void Mesh::draw(Shader& shader)
{
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mEBO);
glBindBuffer(GL_ARRAY_BUFFER, mVBO);
GLuint vpos = glGetAttribLocation(shader.program(), "vPosition");
GLuint vnor = glGetAttribLocation(shader.program(), "vNormal");
glVertexAttribPointer(vpos, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
glVertexAttribPointer(vnor, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)sizeof(Vector));
shader.bind();
glEnableVertexAttribArray(vpos);
glEnableVertexAttribArray(vnor);
glDrawElements(GL_TRIANGLES, mIndices.size(), GL_UNSIGNED_INT, 0);
shader.unbind();
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
void loadSquare(Mesh& mesh)
{
std::vector<Vertex> vertices;
vertices.push_back(Vertex(Vector(0.5f, 0.5f, 0.f), Vector(1.f, 0.f, 0.f)));
vertices.push_back(Vertex(Vector(-0.5f, 0.5f, 0.f), Vector(0.f, 1.f, 0.f)));
vertices.push_back(Vertex(Vector(-0.5f, -0.5f, 0.f), Vector(0.f, 0.f, 1.f)));
vertices.push_back(Vertex(Vector(0.5f, -0.5f, 0.f), Vector(1.f, 0.f, 1.f)));
std::vector<GLuint> indices;
indices.push_back(0);
indices.push_back(1);
indices.push_back(2);
indices.push_back(0);
indices.push_back(2);
indices.push_back(3);
mesh = Mesh(vertices, indices);
}
int main(int argc, char** argv)
{
// Create opengl context and window
initOGL();
// Create shaders
Shader shader("render.vglsl", "render.fglsl");
Mesh mesh;
loadSquare(mesh);
while (!glfwWindowShouldClose(window))
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
mesh.draw(shader);
glfwSwapBuffers(window);
glfwPollEvents();
}
glfwTerminate();
return 0;
}
If I run it, it just displays a gray image in the window it creates.
After stepping through the application with the debugger, I see that when it hits the line mesh = Mesh(vertices, indices) it creates the OpenGL buffers and copies the vertices and indices std::vectors into the mesh variable that was passed as a parameter.
However, it also calls the destructor of the object created by Mesh(vertices, indices), which in turn invalidates the buffers in the OpenGL context, so when the application reaches mesh.draw(shader) the buffers mesh refers to are no longer valid.
Can a move constructor help me solve this problem, i.e. avoid the call to the destructor of Mesh(vertices, indices)? Are there any other solutions?

From your source, you are doing the bind in an object that is immediately destroyed.
Essentially, in your loadSquare function you are creating a mesh and binding it when you write Mesh(vertices, indices); on the right side of the assignment in the last line of loadSquare.
mesh = Mesh(vertices, indices);
Think of that line like this:
...
Mesh m1(vertices, indices); // a
mesh = m1; // b
// m1 gets destroyed here.
}
Line (a) creates the mesh and binds/uploads its data.
When you assign it to mesh in line (b), mesh.mVertices and mesh.mIndices get copies of the vectors, and mVBO and mEBO get copies of the buffer handles.
Think of line (b) as writing:
mesh.mVertices = m1.mVertices; // mesh gets a new vector with the same values
mesh.mIndices = m1.mIndices; // mesh gets a new vector with the same values
mesh.mVBO = m1.mVBO;
mesh.mEBO = m1.mEBO;
At the end of loadSquare() m1 will be destroyed (destructor called).
Back in the calling function you end up with a mesh whose mVBO and mEBO refer to buffers that m1's destructor has already deleted. It does contain its own vectors with the same values, but those are copies in different memory locations that were never uploaded to GL.
There are various ways to solve this, e.g. returning the square mesh through a pointer, or writing a proper assignment operator (google for shallow copy vs. deep copy).
But my suggestion would be to create an empty constructor and an additional fillMesh function that does what your current constructor does.
Mesh::Mesh(void); // set mVBO and mEBO to zero.
void Mesh::fillMesh(std::vector<Vertex> vertices, std::vector<GLuint> indices); // same code as your current constructor.
Then rewrite your loadSquare function like this:
void loadSquare(Mesh& mesh)
{
std::vector<Vertex> vertices;
vertices.push_back(Vertex(Vector(0.5f, 0.5f, 0.f), Vector(1.f, 0.f, 0.f)));
vertices.push_back(Vertex(Vector(-0.5f, 0.5f, 0.f), Vector(0.f, 1.f, 0.f)));
vertices.push_back(Vertex(Vector(-0.5f, -0.5f, 0.f), Vector(0.f, 0.f, 1.f)));
vertices.push_back(Vertex(Vector(0.5f, -0.5f, 0.f), Vector(1.f, 0.f, 1.f)));
std::vector<GLuint> indices;
indices.push_back(0);
indices.push_back(1);
indices.push_back(2);
indices.push_back(0);
indices.push_back(2);
indices.push_back(3);
mesh.fillMesh(vertices, indices);
}
Thus loadSquare creates the vertices and indices, sets them into the mesh passed in by the calling function, and binds them.
Further notes (for a clean solution):
The destructor should probably also unbind the vertex and index buffers from GL.
The fillMesh function should probably check whether the mesh is already filled and release the old buffers before setting and binding the new ones (in case you call fillMesh again on an active mesh).
You should probably still write an assignment operator that calls fillMesh:
Mesh::operator=(const Mesh &other); // google shallow copy
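For illustration, a rough sketch of what that assignment operator could look like on top of fillMesh (untested; it assumes fillMesh re-creates the GL buffers exactly like your current constructor and that mVBO/mEBO are zero in an empty mesh):
Mesh& Mesh::operator=(const Mesh& other)
{
    if (this != &other)
    {
        // Release any buffers this mesh already owns (glDeleteBuffers ignores the name 0).
        glDeleteBuffers(1, &mVBO);
        glDeleteBuffers(1, &mEBO);
        mVBO = 0;
        mEBO = 0;
        // Deep copy: re-upload the data so this object gets its own GL buffers
        // instead of copying buffer names it does not own.
        fillMesh(other.mVertices, other.mIndices);
    }
    return *this;
}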

The Mesh constructor gets called when you instantiate Mesh(vertices, indices). This is just a temporary object instance. The code then calls the assignment operator to copy the temporary instance into the mesh variable. Since you haven't defined operator= (given the code provided), it does the default member-wise assignment. Once that assignment is completed, the ~Mesh destructor of the temporary gets called.
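In code, that sequence of events is roughly this (a sketch; tmp is a hypothetical name for the temporary):
Mesh tmp(vertices, indices); // constructor runs, GL buffers are created
mesh = tmp;                  // default member-wise assignment copies mVBO/mEBO
// tmp is destroyed here: ~Mesh() calls glDeleteBuffers on the very
// buffer names that mesh has just copied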

You violated the rule of three (rule of five), so of course you get issues after the assignment. On this line:
mesh = Mesh(vertices, indices);
you create a temporary object that is destroyed immediately after the statement. So either properly implement or prohibit the copy constructor and copy assignment operator to resolve the issue. You may want to implement a move constructor and move assignment operator as well, especially if you prohibit copying.
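For example, a minimal sketch of the move-enabled variant, based on the Mesh from the question (untested; the key point is that the moved-from object is left with zeroed buffer names, and glDeleteBuffers silently ignores the name 0):
Mesh(const Mesh&) = delete;            // forbid accidental shallow copies
Mesh& operator=(const Mesh&) = delete;

Mesh(Mesh&& other) noexcept
    : mVertices(std::move(other.mVertices)),
      mIndices(std::move(other.mIndices)),
      mVBO(other.mVBO),
      mEBO(other.mEBO)
{
    other.mVBO = 0;   // the temporary's destructor now deletes nothing
    other.mEBO = 0;
}

Mesh& operator=(Mesh&& other) noexcept
{
    if (this != &other)
    {
        glDeleteBuffers(1, &mVBO);     // free whatever this mesh owned
        glDeleteBuffers(1, &mEBO);
        mVertices = std::move(other.mVertices);
        mIndices = std::move(other.mIndices);
        mVBO = other.mVBO;
        mEBO = other.mEBO;
        other.mVBO = 0;
        other.mEBO = 0;
    }
    return *this;
}
With this in place, mesh = Mesh(vertices, indices); moves the buffers into mesh, and the temporary's destructor no longer destroys them.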

Related

Why is my Index Generation Function not correctly building the triangle primitives?

I am trying to code a function which automatically populates a mesh's index vector container. The function should work without issue in theory as it generates the proper indices in their correct order; however, the triangles do not form! Instead, I am left with a single line.
My mesh generation code is supposed to build an octahedron and then render it in the main game loop. The mesh class is shown below in its entirety:
struct vertex
{
glm::vec3 position;
glm::vec3 color;
};
class Mesh
{
public:
GLuint VAO, VBO, EBO;
std::vector <vertex> vtx;
std::vector <glm::vec3> idx;
glm::mat4 modelMatrix = glm::mat4(1.f);
Mesh(glm::vec3 position, glm::vec3 scale)
{
vertexGen(6);
idx = indexGen(6);
modelMatrix = glm::scale(glm::translate(modelMatrix, position), scale);
initMesh();
};
void Render(Shader shaderProgram, Camera camera, bool wireframe)
{
glUseProgram(shaderProgram.ID);
glPatchParameteri(GL_PATCH_VERTICES, 3); // Indicates to the VAO that each group of three vertices is one patch (triangles)
glProgramUniformMatrix4fv(shaderProgram.ID, 0, 1, GL_FALSE, glm::value_ptr(modelMatrix));
glProgramUniformMatrix4fv(shaderProgram.ID, 1, 1, GL_FALSE, glm::value_ptr(camera.camMatrix));
glProgramUniform3fv(shaderProgram.ID, 2, 1, glm::value_ptr(camera.Position));
glBindVertexArray(VAO); // Binds the VAO to the shader program
if (wireframe)
{
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glDisable(GL_CULL_FACE);
}
else
{
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
//glEnable(GL_CULL_FACE);
}
glDrawElements(GL_PATCHES, idx.size(), GL_UNSIGNED_INT, 0); // Tells the shader program how to draw the primitives
}
private:
void vertexGen(int n) {
// Populate the base six vertices
vtx.push_back(vertex{ glm::vec3( 0.0f, 0.5f, 0.0f), glm::vec3(0.f, 1.f, 0.f) });
vtx.push_back(vertex{ glm::vec3(-0.5f, 0.0f, 0.0f), glm::vec3(0.f, 1.f, 0.f) });
vtx.push_back(vertex{ glm::vec3( 0.0f, 0.0f, -0.5f), glm::vec3(0.f, 1.f, 0.f) });
vtx.push_back(vertex{ glm::vec3( 0.5f, 0.0f, 0.0f), glm::vec3(0.f, 1.f, 0.f) });
vtx.push_back(vertex{ glm::vec3( 0.0f, 0.0f, 0.5f), glm::vec3(0.f, 1.f, 0.f) });
vtx.push_back(vertex{ glm::vec3( 0.0f,-0.5f, 0.0f), glm::vec3(0.f, 1.f, 0.f) });
}
std::vector<glm::vec3> indexGen(int n) {
std::vector<glm::vec3> indices;
// Calculate the indices for the top 4 triangles
indices.push_back(glm::vec3( 0, n - 5, n - 4 ));
indices.push_back(glm::vec3( 0, n - 4, n - 3 ));
indices.push_back(glm::vec3( 0, n - 3, n - 2 ));
indices.push_back(glm::vec3( 0, n - 2, n - 5 ));
// Calculate the indices for the bottom 4 triangles
indices.push_back(glm::vec3( 5, n - 5, n - 4));
indices.push_back(glm::vec3( 5, n - 4, n - 3));
indices.push_back(glm::vec3( 5, n - 3, n - 2));
indices.push_back(glm::vec3( 5, n - 2, n - 5));
return indices;
}
void initMesh()
{
glCreateVertexArrays(1, &VAO); // Sets the address of the uint VAO as the location of a gl vertex array object
glCreateBuffers(1, &VBO); // Sets the address of the uint VBO as the location of a gl buffer object
glCreateBuffers(1, &EBO); // Sets the address of the uint EBO as the location of a gl buffer object
glNamedBufferData(VBO, vtx.size() * sizeof(vtx[0]), vtx.data(), GL_STATIC_DRAW); // Sets the data of the buffer named VBO
glNamedBufferData(EBO, idx.size() * sizeof(idx[0]), idx.data(), GL_STATIC_DRAW); // Sets the data of the buffer named EBO
glEnableVertexArrayAttrib(VAO, 0); // Enables an attribute of the VAO in location 0
glEnableVertexArrayAttrib(VAO, 1); // Enables an attribute of the VAO in location 1
glVertexArrayAttribBinding(VAO, 0, 0); // Layout Location of Position Vectors
glVertexArrayAttribBinding(VAO, 1, 0); // Layout Location of Color Values
glVertexArrayAttribFormat(VAO, 0, 3, GL_FLOAT, GL_FALSE, 0); // Size, and Type of Position Vectors
glVertexArrayAttribFormat(VAO, 1, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat)); // For the Color Values
glVertexArrayVertexBuffer(VAO, 0, VBO, 0, 6 * sizeof(GLfloat)); // Sets the VBO to indicate the start, offset, and stride of vertex data in the VAO
glVertexArrayElementBuffer(VAO, EBO); // Sets the EBO to index the VAO vertex connections
}
};
I took this problem step by step and did all of the basic math on paper. The index generation function returns the same indices, in the same order, as writing the indices out by hand; yet the hand-written indices produce the desired result, whereas the generated ones only produce a single line when rendered.
I suspect that the issue lies in my mesh initialization function (initMesh), specifically in glNamedBufferData or glVertexArrayVertexBuffer, but my knowledge of these functions is very limited. I tried changing the parameters of glNamedBufferData to different variations of idx.size() * sizeof(idx[0].x), but that yielded the same results, so I am at a loss. Could someone help me fix this, please?
glm::vec3 is a vector of floats (I think) but you are telling OpenGL to read them as unsigned ints.
Float 0.0 is 0x00000000 (i.e. same as int 0), but float 1.0 is 0x3f800000 (same as int 1065353216). They aren't compatible ways to store numbers. You could try glm::ivec3 which is a vector of ints, but I think most people would use std::vector<int> (or unsigned int) and use 3 entries per triangle.
I think it's okay in this case, but using types like ivec3 when you really mean 3 separate ints isn't always good practice, because the compiler can insert padding in unexpected places. It's possible that on some platforms ivec3 could be 3 ints plus an extra 4 bytes of padding, 16 bytes in total, and the extra padding bytes would throw off the layout you're relying on. glDrawElements wouldn't skip over padding after every 3 indices, and there would be no way to tell it to do that. It's okay for vertices, since you can tell OpenGL exactly where the data is.
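A minimal sketch of that change, reusing the octahedron layout from the question (an assumption on my part, not tested against the rest of your code):
std::vector<GLuint> idx;   // instead of std::vector<glm::vec3>

std::vector<GLuint> indexGen(int n) {
    // Three GLuint entries per triangle, tightly packed.
    std::vector<GLuint> indices = {
        // top 4 triangles
        0, GLuint(n - 5), GLuint(n - 4),
        0, GLuint(n - 4), GLuint(n - 3),
        0, GLuint(n - 3), GLuint(n - 2),
        0, GLuint(n - 2), GLuint(n - 5),
        // bottom 4 triangles
        5, GLuint(n - 5), GLuint(n - 4),
        5, GLuint(n - 4), GLuint(n - 3),
        5, GLuint(n - 3), GLuint(n - 2),
        5, GLuint(n - 2), GLuint(n - 5),
    };
    return indices;
}

// glNamedBufferData(EBO, idx.size() * sizeof(GLuint), idx.data(), GL_STATIC_DRAW);
// glDrawElements(GL_PATCHES, idx.size(), GL_UNSIGNED_INT, 0); // idx.size() is now the index count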

Align a matrix to a vector in OpenGL

I'm trying to visualize normals of triangles.
I have created a triangle to use as the visual representation of the normal but I'm having trouble aligning it to the normal.
I have tried using glm::lookAt but the triangle ends up in some weird position and rotation after that. I am able to move the triangle in the right place with glm::translate though.
Here is my code to create the triangle which is used for the visualization:
// xyz rgb
float vertex_data[] =
{
0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f,
0.25f, 0.0f, 0.025f, 0.0f, 1.0f, 1.0f,
0.25f, 0.0f, -0.025f, 0.0f, 1.0f, 1.0f,
};
unsigned int index_data[] = {0, 1, 2};
glGenVertexArrays(1, &nrmGizmoVAO);
glGenBuffers(1, &nrmGizmoVBO);
glGenBuffers(1, &nrmGizmoEBO);
glBindVertexArray(nrmGizmoVAO);
glBindBuffer(GL_ARRAY_BUFFER, nrmGizmoVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertex_data), vertex_data, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, nrmGizmoEBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(index_data), index_data, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(1);
glBindVertexArray(0);
and here is the code to draw the visualizations:
for(unsigned int i = 0; i < worldTriangles->size(); i++)
{
Triangle *tri = &worldTriangles->at(i);
glm::vec3 wp = tri->worldPosition;
glm::vec3 nrm = tri->normal;
nrmGizmoMatrix = glm::mat4(1.0f);
//nrmGizmoMatrix = glm::translate(nrmGizmoMatrix, wp);
nrmGizmoMatrix = glm::lookAt(wp, wp + nrm, glm::vec3(0.0f, 1.0f, 0.0f));
gizmoShader.setMatrix(projectionMatrix, viewMatrix, nrmGizmoMatrix);
glBindVertexArray(nrmGizmoVAO);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
}
When using only glm::translate, the triangles appear in the right positions but all point in the same direction. How can I rotate them so that they point in the direction of the normal vector?
Your code doesn't work because lookAt is intended to be used as the view matrix, thus it returns the transform from world space to local (camera) space. In your case you want the reverse -- from local (triangle) to world space. Taking an inverse of lookAt should solve that.
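A minimal sketch of that fix, keeping the variable names from the question (note that lookAt still degenerates when nrm is parallel to the up vector):
nrmGizmoMatrix = glm::inverse(glm::lookAt(wp, wp + nrm, glm::vec3(0.0f, 1.0f, 0.0f)));
gizmoShader.setMatrix(projectionMatrix, viewMatrix, nrmGizmoMatrix);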
However, I'd take a step back and look at (haha) the bigger picture. What I notice about your approach:
It's very inefficient -- you issue a separate call with a different model matrix for every single normal.
You don't even need the entire model matrix. A triangle is a 2-d shape, so all you need is two basis vectors.
I'd instead generate all the vertices for the normals in a single array, and then use glDrawArrays to draw that. For the actual calculation, observe that we have one degree of freedom when it comes to aligning the triangle along the normal. Your lookAt code resolves that DoF rather arbitrarily. A better way to resolve it is to constrain it by requiring that the triangle faces towards the camera, thus maximizing the visible area. The calculation is straightforward:
// inputs: vertices output array, normal position, normal direction, camera position
void emit_normal(std::vector<vec3> &v, const vec3 &p, const vec3 &n, const vec3 &c) {
static const float length = 0.25f, width = 0.025f;
vec3 t = normalize(cross(n, c - p)); // tangent
v.push_back(p);
v.push_back(p + length*n + width*t);
v.push_back(p + length*n - width*t);
}
// ... in your code, generate normals through:
std::vector<vec3> normals;
for(unsigned int i = 0; i < worldTriangles->size(); i++) {
Triangle *tri = &worldTriangles->at(i);
emit_normal(normals, tri->worldPosition, tri->normal, camera_position);
}
// ... create VAO for normals ...
glDrawArrays(GL_TRIANGLES, 0, normals.size());
Note, however, that this would make the normal mesh camera-dependent -- which is desirable when rendering normals with triangles. Most CAD software draws normals with lines instead, which is much simpler and avoids many problems:
void emit_normal(std::vector<vec3> &v, const vec3 &p, const vec3 &n) {
static const float length = 0.25f;
v.push_back(p);
v.push_back(p + length*n);
}
// ... in your code, generate normals through:
std::vector<vec3> normals;
for(unsigned int i = 0; i < worldTriangles->size(); i++) {
Triangle *tri = &worldTriangles->at(i);
emit_normal(normals, tri->worldPosition, tri->normal);
}
// ... create VAO for normals ...
glDrawArrays(GL_LINES, 0, normals.size());

OpenGL: vertexArray vs glBegin()

I am following a tutorial on OpenGL and am running into a problem. I constructed a class called Mesh that takes an array of vertices in its constructor and generates vertex arrays and such to do the drawing. The problem is that I am not seeing anything. Here is the interface:
class Mesh
{
public:
Mesh(Vertex * vertices, size_t numVertices);
virtual ~Mesh();
void Draw();
private:
enum { POSITION_VB, NUM_BUFFERS };
GLuint m_vertexArrayObject;
GLuint m_vertexArrayBuffers;
size_t m_drawCount;
};
and here are the implementations
#include "mesh.h"
Mesh::Mesh(Vertex *vertices, size_t numVertices)
{
m_drawCount = numVertices;
glGenVertexArrays(1, &m_vertexArrayObject);
glBindVertexArray(m_vertexArrayObject);
glGenBuffers(NUM_BUFFERS, &m_vertexArrayBuffers);
glBindBuffer(GL_ARRAY_BUFFER, m_vertexArrayBuffers);
glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(vertices[0]), vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glBindVertexArray(0);
}
void Mesh::Draw()
{
glBindVertexArray(m_vertexArrayObject);
glDrawArrays(GL_TRIANGLES, 0, m_drawCount);
glBindVertexArray(0);
}
Mesh::~Mesh()
{
glDeleteVertexArrays(1, &m_vertexArrayObject);
}
The Vertex type is a simple class that looks like this:
class Vertex {
public:
Vertex(glm::vec3 const & pos) { this->pos = pos;}
private:
glm::vec3 pos;
};
If I change the implementation of Mesh::Draw() to
glBegin(GL_TRIANGLES);
glVertex3f(-1.0f, -0.25f, 0.0f); //triangle first vertex
glVertex3f(-0.5f, -0.25f, 0.0f); //triangle second vertex
glVertex3f(-0.75f, 0.25f, 0.0f); //triangle third vertex
glEnd(); //end drawing of triangles
I get a triangle drawn on the screen. My question is: does this necessarily mean that there is an error in the implementation of Mesh's member functions, and if so, can anyone spot it? I thought maybe the glBegin method bypasses some error elsewhere in the code that the vertex-array method cannot bypass. I would be grateful for any help. Also, I can post additional code if needed!
The shader code:
#version 120
void main()
{
gl_FragColor = vec4(1.0, 1.0, 0.0, 1.0);
}
More than likely your Mesh class's destructor is to blame:
Mesh::~Mesh()
{
glDeleteVertexArrays(1, &m_vertexArrayObject);
}
You have an implicit copy constructor, which does a byte-for-byte copy of your class's members whenever a copy of your Mesh is necessary. That byte-for-byte copy includes the name (m_vertexArrayObject) of an OpenGL-managed resource, which means you now have two distinct objects referencing the same resource.
As soon as one of these copies goes out of scope, it will delete the VAO that is still referenced by the original object.
The simplest way to solve this problem is to disable the copy constructor (C++11 has new syntax for this), then you will get a compiler error anytime a copy needs to be made.
private:
Mesh (const Mesh& original); // Copy ctor is inaccessible.
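With C++11 you would instead write this (deleting the copy assignment operator as well is usually a good idea):
Mesh (const Mesh& original) = delete;            // any copy is now a compile-time error
Mesh& operator= (const Mesh& original) = delete;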
If you really do want to support multiple Mesh objects sharing the same Vertex Array Object, you will need to add reference counting and only free the VAO when the reference count reaches 0.

Can't draw things with EBOs

In a C++ application I am writing, I am trying to draw a quad using an EBO (element buffer object). Whenever I try, I can't get that quad to draw at all. What am I doing wrong?
code:
//vertices and indices
GLfloat vertices[]={
//position texture coordinate
-0.005f,0.02f,0.0f, 0.0f,1.0f,
0.02f,0.02f,0.0f, 1.0f,1.0f,
0.02f,-0.02f,0.0f, 1.0f,0.0f,
-0.005f,-0.02f,0.0f, 0.0f,0.0f,
};
GLfloat indices[]={
0,1,3,
2,3,1
};
//initialization
glCreateVertexArrays(1,&VAO);
glBindVertexArray(VAO);
glCreateBuffers(1,&VBO);
glCreateBuffers(1,&EBO);
glBindBuffer(GL_ARRAY_BUFFER,VBO);
glBufferData(GL_ARRAY_BUFFER,sizeof(vertices),vertices,GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,EBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,sizeof(indices),indices,GL_STATIC_DRAW);
glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,5*sizeof(GLfloat),(GLvoid*)nullptr);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1,2,GL_FLOAT,GL_FALSE,5*sizeof(GLfloat),(GLvoid*)(3*sizeof(GLfloat)));
glEnableVertexAttribArray(1);
glBindVertexArray(0);
//drawing commands
transformLocation=glGetUniformLocation(textureProgram,"transform");
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D,woodTexture);
glUseProgram(textureProgram);
glUniformMatrix4fv(transformLocation,1,GL_FALSE,glm::value_ptr(transform));
glBindVertexArray(bowHandleVAO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,bowHandleEBO);
glDrawElements(GL_TRIANGLES,6,GL_UNSIGNED_INT,nullptr);
This works with the glDrawArrays equivalent, but whenever I try to use EBOs it won't draw anything. Comment if you need more information.
The most immediate error that I can see is a type mismatch between how your indices are defined and how they are used in the glDrawElements call.
Suggestion: change GLfloat to GLuint, i.e., define your indices as:
GLuint indices[]={ //...
In addition to what Amadeus says about changing your indices array from GLfloat to GLuint, you seem to be using the wrong VAO and EBO. In the setup code you show, you upload your vertex data and indices into the buffers attached to VAO and EBO, but then when you draw you bind bowHandleVAO and bowHandleEBO.
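Putting both suggestions together, a sketch of the corrected pieces (assuming the VAO/EBO from the setup code are the objects you actually meant to draw with):
GLuint indices[] = {   // GLuint, not GLfloat
    0, 1, 3,
    2, 3, 1
};
// ... same setup as before ...
glBindVertexArray(VAO);   // bind the VAO that was actually filled above
// the GL_ELEMENT_ARRAY_BUFFER binding is stored in the VAO, so re-binding the EBO here is optional
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);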

OpenGL 3.3 Batch Rendering - Triangle doesn't show up

I'm trying to implement a batch-rendering system using OpenGL, but the triangle I'm trying to render doesn't show up.
In the constructor of my Renderer class, I'm initializing the VBO and VAO and also loading my shader program (this does work, so the error can't be found there). The VBO is supposed to be capable of holding the maximum number of vertices I'll permit, which is defined in the header as 30000. The VAO contains the information about how the data that I'll store in that buffer is laid out - in this case I use a struct called VertexData which only contains a 3D vector ('vertex'), but will also contain stuff like colors etc. later on. So I create the buffer with the size I already stated, don't fill in any content yet, and provide the layout using 'glVertexAttribPointer'. The '_vertexCount', as the name implies, counts the number of vertices currently stored inside that buffer for drawing purposes.
The constructor of my Renderer-class (note that every private member variable defined in the header file starts with an _ ):
Renderer::Renderer(std::string vertexShaderPath, std::string fragmentShaderPath) {
_shaderProgram = ShaderLoader::createProgram(vertexShaderPath, fragmentShaderPath);
glGenBuffers(1, &_vbo);
glGenVertexArrays(1, &_vao);
glBindVertexArray(_vao);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glEnableVertexAttribArray(0);
glBufferData(GL_ARRAY_BUFFER, RENDERER_MAX_VERTICES * sizeof(VertexData), NULL, GL_DYNAMIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) 0);
glDisableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
_vertexCount = 0;
}
Once the initialization is done, the 'begin' procedure has to be called during the main loop to render anything. This maps the current buffer with write permissions so the vertices that should be rendered in the current frame can be filled in:
void Renderer::begin() {
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
_buffer = (VertexData*) glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
}
After beginning, the 'submit' procedure can be called to add vertices and their corresponding data to the buffer. I add the data to the location in memory the buffer currently points to, then advance the buffer and increase the vertex count:
void Renderer::submit(VertexData* data) {
_buffer = data;
_buffer++;
_vertexCount++;
}
Finally, once all vertices are pushed to the buffer, the 'end' procedure will unmap the buffer to enable the actual rendering of the vertices, bind the VAO, use the shader program, render the provided vertices as triangles, unbind the VAO and reset the vertex count:
void Renderer::end() {
glUnmapBuffer(GL_ARRAY_BUFFER);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindVertexArray(_vao);
glUseProgram(_shaderProgram);
glDrawArrays(GL_TRIANGLES, 0, _vertexCount);
glBindVertexArray(0);
_vertexCount = 0;
}
In the main loop I'm beginning the rendering, submitting three vertices to render a simple triangle and ending the rendering process. This is the most important part of that file:
Renderer renderer("../sdr/basicVertex.glsl", "../sdr/basicFragment.glsl");
Renderer::VertexData one;
one.vertex = glm::vec3(-1.0f, 1.0f, 0.0f);
Renderer::VertexData two;
two.vertex = glm::vec3( 1.0f, 1.0f, 0.0f);
Renderer::VertexData three;
three.vertex = glm::vec3( 0.0f,-1.0f, 0.0f);
...
while (running) {
...
renderer.begin();
renderer.submit(&one);
renderer.submit(&two);
renderer.submit(&three);
renderer.end();
SDL_GL_SwapWindow(mainWindow);
}
This may not be the most efficient way of doing this and I'm open to criticism, but my biggest problem is that nothing appears at all. The problem has to lie within those code snippets, but I can't find it - I'm a newbie when it comes to OpenGL, so help is greatly appreciated. If full source code is required, I'll post it using pastebin, but I'm about 99% sure that I did something wrong in those code snippets.
Thank you very much!
You have the vertex attribute disabled when you make the draw call. This part of the setup code looks fine:
glBindVertexArray(_vao);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glEnableVertexAttribArray(0);
glBufferData(GL_ARRAY_BUFFER, RENDERER_MAX_VERTICES * sizeof(VertexData), NULL, GL_DYNAMIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) 0);
At this point, the attribute is set up and enabled. But this is followed by:
glDisableVertexAttribArray(0);
Now the attribute is disabled, and there's nothing else in the posted code that enables it again. So when you make the draw call, you don't have a vertex attribute that is actually enabled.
You can simply remove the glDisableVertexAttribArray() call to fix this.
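In other words, the end of the constructor can simply look like this (a sketch; the enabled state of the attribute is stored in the VAO, so leaving it enabled is exactly what you want):
glBindVertexArray(_vao);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glBufferData(GL_ARRAY_BUFFER, RENDERER_MAX_VERTICES * sizeof(VertexData), NULL, GL_DYNAMIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) 0);
glEnableVertexAttribArray(0);   // stays enabled as part of the VAO state
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);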
Another problem in your code is the submit() method:
void Renderer::submit(VertexData* data) {
_buffer = data;
_buffer++;
_vertexCount++;
}
Both _buffer and data are pointers to a VertexData structure. So the assignment:
_buffer = data;
is a pointer assignment. Instead of copying the data into the buffer, it modifies the buffer pointer. This should be:
*_buffer = *data;
This will copy the vertex data into the buffer, and leave the buffer pointer unchanged until you explicitly increment it in the next statement.
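With that change, the whole method becomes:
void Renderer::submit(VertexData* data) {
    *_buffer = *data;   // copy the vertex data into the mapped buffer slot
    _buffer++;          // advance to the next slot
    _vertexCount++;
}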