Using a Vertex Buffer in OpenGL

I am using many polygons kept in plain RAM arrays, and rendering them is very slow. Could you please explain how to use a vertex buffer in OpenGL (which functions to call, etc.)? The programming language is C++.

You should really read a tutorial about this, for example this one:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-2-the-first-triangle/#The_VAO
In the meantime, I can give you this code snippet:
float vertices[] = {
    1, 0, 0, 1,
    0, 1, 0, 1,
    0, 0, 1, 1,
};
GLuint vertexNumber = 3;                  // Number of vertices in your array
int vertexStrideSize = 4 * sizeof(float); // Size in bytes of one vertex (4 floats here)

// Create the vertex buffer object
GLuint buf;
glGenBuffers( 1, &buf );              // Create the buffer
glBindBuffer( GL_ARRAY_BUFFER, buf ); // Bind the buffer
glBufferData( GL_ARRAY_BUFFER, vertexStrideSize * vertexNumber, vertices, GL_STATIC_DRAW ); // Fill the buffer

// Hook the buffer up to your vertex shader
GLint posLoc = glGetAttribLocation( shaderProgram, "aPosition" ); // shaderProgram is your linked program; in the shader, declare "in vec4 aPosition;"
glVertexAttribPointer( posLoc, 4, GL_FLOAT, GL_FALSE, vertexStrideSize, (void*)0 );
glEnableVertexAttribArray( posLoc );
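Once the attribute is set up, drawing is just a matter of activating the program and issuing a draw call. A minimal sketch, assuming shaderProgram is your compiled and linked program and (on a core profile) a vertex array object is already bound:
glUseProgram( shaderProgram );                 // assumed name for your linked shader program
glDrawArrays( GL_TRIANGLES, 0, vertexNumber ); // draws one triangle from the three vertices above
Because the vertex data now lives in a buffer on the GPU, the vertices are no longer re-sent from RAM every frame, which is where the slowdown with client-side arrays comes from.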

Related

glDrawArrays bad access

I'm trying to make a simple Pong game, but I'm running into some issues. Essentially, I have an array of four points with x and y values meant to represent a hardcoded ball, and I need to get that ball to display properly. I keep crashing when I try to use glDrawArrays because the 4 that I'm passing as the last parameter (meant to draw four vertices) is reported as out of bounds. Any idea why?
In my setup:
//put in vertices for ball
//point 1
ballPosArr[0] = 0.1; //x
ballPosArr[1] = 0.1; //y
//pt 2
ballPosArr[2] = -0.1;
ballPosArr[3] = 0.1;
//pt 3
ballPosArr[4] = 0.1;
ballPosArr[5] = -0.1;
//pt 4
ballPosArr[6] = -0.1;
ballPosArr[7] = 0.1;
//ball position buffer
GLuint buffer;
glGenBuffers( 1, &buffer);
glBindBuffer( GL_ARRAY_BUFFER, buffer);
glBufferData( GL_ARRAY_BUFFER, 8*sizeof(GLuint), ballPosArr, GL_STATIC_DRAW );
_buffers.push_back(buffer); //_buffers is a vector of GLuint's
// Initialize the attributes from the vertex shader
GLuint bPos = glGetAttribLocation(_shaderProgram, "ballPosition" );
glEnableVertexAttribArray(bPos);
glVertexAttribPointer(bPos, 2, GL_FLOAT, GL_FALSE, 0, &ballPosArr[0]);
In my display callback:
GLuint bPos = glGetAttribLocation(_shaderProgram, "ballPosition");
glEnableVertexAttribArray(bPos);
//rebind buffers and send data again
//ball position
glBindBuffer(GL_ARRAY_BUFFER, _buffers[0]);
glVertexAttribPointer(bPos, 2, GL_FLOAT, GL_FALSE, 0, &ballPosArr[0]);
glDrawArrays( GL_POLYGON, 0, 4); //bad access error at 4
In my vshader.txt:
attribute vec2 ballPosition;
void main() {
}
If you use VBOs, which you do, the last argument of glVertexAttribPointer is a relative offset into the buffer, not the CPU address of your array. In your case, pass 0 for the last argument, since the vertex data you want to use starts at the beginning of the buffer.
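A sketch of how the attribute setup and draw call in the display callback would look with that change (same names as in the question):
glBindBuffer(GL_ARRAY_BUFFER, _buffers[0]);
glVertexAttribPointer(bPos, 2, GL_FLOAT, GL_FALSE, 0, (void*)0); // offset into the bound VBO, not a pointer to ballPosArr
glDrawArrays(GL_POLYGON, 0, 4);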

OpenGL drawing meshes incorrectly

I'm attempting to make an OpenGL engine in C++, but cannot render meshes correctly. Meshes, when rendered, create faces that connect two random points on the mesh, or a random point on the mesh with the origin (0, 0, 0).
The problem can be seen in the screenshot (I made it a wireframe to see the problem more clearly).
Code:
// Render all meshes (Graphics.cpp)
for( int curMesh = 0; curMesh < numMesh; curMesh++ ) {
// Save pointer of buffer
meshes[curMesh]->updatebuf();
Buffer buffer = meshes[curMesh]->buffer;
// Update model matrix
glm::mat4 mvp = Proj*View*(meshes[curMesh]->model);
// Initialize vertex array
glBindBuffer( GL_ARRAY_BUFFER, vertbuffer );
glBufferData( GL_ARRAY_BUFFER, sizeof(GLfloat)*buffer.numcoords*3, meshes[curMesh]->verts, GL_STATIC_DRAW );
// Pass information to shader
GLuint posID = glGetAttribLocation( shader, "s_vPosition" );
glVertexAttribPointer( posID, 3, GL_FLOAT, GL_FALSE, 0, (void*)0 );
glEnableVertexAttribArray( posID );
// Check if texture applicable
if( meshes[curMesh]->texID != NULL && meshes[curMesh]->uvs != NULL ) {
// Initialize uv array
glBindBuffer( GL_ARRAY_BUFFER, uvbuffer );
glBufferData( GL_ARRAY_BUFFER, sizeof(GLfloat)*buffer.numcoords*2, meshes[curMesh]->uvs, GL_STATIC_DRAW );
// Pass information to shader
GLuint uvID = glGetAttribLocation( shader, "s_vUV" );
glVertexAttribPointer( uvID, 2, GL_FLOAT, GL_FALSE, 0, (void*)(0) );
glEnableVertexAttribArray( uvID );
// Set mesh texture
glActiveTexture( GL_TEXTURE0 );
glBindTexture( GL_TEXTURE_2D, meshes[curMesh]->texID );
GLuint texID = glGetUniformLocation( shader, "Sampler" );
glUniform1i( texID, 0 );
}
// Activate shader
glUseProgram( shader );
// Set MVP matrix
GLuint mvpID = glGetUniformLocation( shader, "MVP" );
glUniformMatrix4fv( mvpID, 1, GL_FALSE, &mvp[0][0] );
// Draw vertices on screen
bool wireframe = true;
if( wireframe )
for(int i = 0; i < buffer.numcoords; i += 3)
glDrawArrays(GL_LINE_LOOP, i, 3);
else
glDrawArrays( GL_TRIANGLES, 0, buffer.numcoords );
}
// Mesh Class (Graphics.h)
class mesh {
public:
mesh();
void updatebuf();
Buffer buffer;
GLuint texID;
bool updated;
GLfloat* verts;
GLfloat* uvs;
glm::mat4 model;
};
My Obj loading code is here: https://www.dropbox.com/s/tdcpg4vok11lf9d/ObjReader.txt (It's pretty crude and isn't organized, but should still work)
This looks like a primitive restart issue to me. It is hard to tell what exactly the problem is without seeing some code. It would help a lot to see roughly the 20 lines above and below, and including, the drawing calls that render the teapot, i.e. the 20 lines before the corresponding glDrawArrays, glDrawElements or glBegin call and the 20 lines after.
Subtract 1 from the indices before you use them: OBJ indices are 1-based, and you will almost certainly need 0-based indices.
This also happens because your triangles do not share vertices, which is why the wireframe does not look clean. If the triangles are not connected, you should construct an index buffer, as sketched below.
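A rough sketch of both suggestions combined (hypothetical names, usual includes assumed; your OBJ loader is assumed to fill objIndices with 1-based values):
std::vector<GLuint> indices(objIndices.size());
for (size_t i = 0; i < objIndices.size(); ++i)
    indices[i] = objIndices[i] - 1; // OBJ indices are 1-based, OpenGL expects 0-based

GLuint ibo;
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLuint), indices.data(), GL_STATIC_DRAW);
glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_INT, (void*)0);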

Why does this vertex buffer object fail to update?

I have the following pieces of code where I successfully create a vertex buffer object, initialize it with data, and render it using GLSL 4.0. However, when I go to update the data stored in the vertices after animation, OpenGL gives me the error code 0x502 (GL_INVALID_OPERATION) and does not accept my updated vertex information.
Could someone point me in the right direction as to why this code does not allow my vertex data to be updated successfully? I should also mention that sometimes the data is updated successfully, so the behaviour is not consistent/predictable.
Data Structure used
struct Vertex3{
glm::vec3 vtx; //0
glm::vec3 norm; //3
glm::vec3 tex; //6 Use for texturing or color
};
vector<Vertex3> geometry.vertices3;
Initialization Code
void solidus::Mesh::initVBO(){
geometry.totalVertexCount = geometry.getVertexCount();
// Allocate an OpenGL vertex array object.
glGenVertexArrays(1, &vertexArrayId);
glGenBuffers(2,geometry.vboObjects);
// Bind the vertex array object to store all the buffers and vertex attributes we create here.
glBindVertexArray(vertexArrayId);
glBindBuffer(GL_ARRAY_BUFFER, geometry.vboObjects[VERTEX_DATA]);
//byte size of all the vertex data
GLuint byte_size = getTotalSize();
//Reserve the initial space for the vertex data
glBufferData(GL_ARRAY_BUFFER, byte_size, NULL, GL_STREAM_DRAW);
if(geometry.isStructVertex4())
initVBO4( );
else if(geometry.isStructVertex3())
initVBO3( );
else
initVBO2( );
//release
glBindVertexArray(0);
geometry.vertices4.clear();
//geometry.vertices3.clear();
geometry.vertices2.clear();
}
void solidus::Mesh::initVBO3( ){
//getTotalSize() == getVtxCount() * sizeof(Vertex3);
glBufferSubData(GL_ARRAY_BUFFER, 0, getTotalSize(), &geometry.vertices3[0]);
//Note: offsetof -- c++ standard library
//Note: glVertexAttribPointer- first parameter is location of GLSL variable
glEnableVertexAttribArray(0); // Vertex4 position
glVertexAttribPointer( (GLuint)0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex3), (GLvoid*)offsetof(Vertex3,vtx) );
// Vertex4 normal
glEnableVertexAttribArray(1);
glVertexAttribPointer( (GLuint)1, 3, GL_FLOAT, GL_TRUE, sizeof(Vertex3), (GLvoid*)offsetof(Vertex3,norm) );
// Texture coords
glEnableVertexAttribArray(2);
glVertexAttribPointer( (GLuint)2, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex3),(GLvoid*)offsetof(Vertex3,tex) );
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, geometry.vboObjects[INDEX_DATA]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLuint)*geometry.indices.size(), &geometry.indices[0], GL_STATIC_DRAW);
}
Updating the mesh vertex information: why does this fail?
void solidus::Mesh::uploadVertexGLFx(){
glBindBuffer(GL_ARRAY_BUFFER, geometry.vboObjects[VERTEX_DATA]);
string e0="";
if(geometry.isStructVertex2()){
solidus::GLVBO::setVBOSubData(getTotalSize (), &geometry.vertices2[0]);
e0="Vertex2";
}else if(geometry.isStructVertex3()){
//THIS IS THE POINT OF INTEREST: at least suspected!!!!!
// getVtxCount() * sizeof(Vertex3) = getTotalSize
glBufferSubData(GL_ARRAY_BUFFER, 0, getTotalSize (), &geometry.vertices3[0]);
e0="Vertex3";
}else {
solidus::GLVBO::setVBOSubData(getTotalSize (), &geometry.vertices4[0]);
e0="Vertex4";
}
//report error if glGetError() is not equal to 0
postMsg("failed to upload vertex for struct " + e0 , "uploadVertexGLFx",30);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
I modified my uploadVertexGLFx function to the code listed below. The main difference with this code is that after I resupplied the vertex information to GL, I informed OpenGL of the pointer offsets again using glVertexAttribPointer. Now the data reliably updates when I call my update function.
void solidus::Mesh::uploadVertexGLFx(){
glBindBuffer(GL_ARRAY_BUFFER, geometry.vboObjects[VERTEX_DATA]);
string e0="";
if(geometry.isStructVertex2()){
solidus::GLVBO::setVBOSubData(getTotalSize (), &geometry.vertices2[0]);
e0="Vertex2";
}else if(geometry.isStructVertex3()){
//glBufferData(GL_ARRAY_BUFFER, getTotalSize (), NULL, GL_STREAM_DRAW);
//THIS IS THE POINT OF INTEREST: at least suspected!!!!!
// getVtxCount() * sizeof(Vertex3) = getTotalSize
cout << "Total Size = " << getTotalSize() <<endl;
cout << "Vtx Count = " << getVtxCount() << endl;
cout << "Sizeof(Vertex3)=" <<sizeof(Vertex3)<<endl;
Vertex3 *f = new Vertex3[getVtxCount()];
for(int i=0; i<getVtxCount();i++){
f[i] = geometry.vertices3[i];
}
glBufferData(GL_ARRAY_BUFFER, getTotalSize(), NULL, GL_STREAM_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, getTotalSize (), f);
//Note: glVertexAttribPointer- first parameter is location of GLSL variable
glEnableVertexAttribArray(0); // Vertex4 position
glVertexAttribPointer( (GLuint)0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex3), (GLvoid*)offsetof(Vertex3,vtx) );
// Vertex4 normal
glEnableVertexAttribArray(1);
glVertexAttribPointer( (GLuint)1, 3, GL_FLOAT, GL_TRUE, sizeof(Vertex3), (GLvoid*)offsetof(Vertex3,norm) );
// Texture coords
glEnableVertexAttribArray(2);
glVertexAttribPointer( (GLuint)2, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex3),(GLvoid*)offsetof(Vertex3,tex) );
delete [] f; // f was allocated with new[], so release it with delete[]
f = nullptr;
e0="Vertex3";
}else {
solidus::GLVBO::setVBOSubData(getTotalSize (), &geometry.vertices4[0]);
e0="Vertex4";
}
//report error if glGetError() is not equal to 0
postMsg("failed to upload vertex for struct " + e0 , "uploadVertexGLFx",30);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
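Condensed, the working update path above amounts to orphaning the buffer storage, re-uploading the data, and re-specifying the attribute pointers. A minimal sketch of that pattern, with the added assumption that the VAO created in initVBO (vertexArrayId) is bound around the update:
glBindVertexArray(vertexArrayId); // assumption: rebind the VAO from initVBO
glBindBuffer(GL_ARRAY_BUFFER, geometry.vboObjects[VERTEX_DATA]);
glBufferData(GL_ARRAY_BUFFER, getTotalSize(), NULL, GL_STREAM_DRAW);          // orphan the old storage
glBufferSubData(GL_ARRAY_BUFFER, 0, getTotalSize(), &geometry.vertices3[0]);  // upload the new data
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex3), (GLvoid*)offsetof(Vertex3, vtx));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_TRUE,  sizeof(Vertex3), (GLvoid*)offsetof(Vertex3, norm));
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex3), (GLvoid*)offsetof(Vertex3, tex));
glBindVertexArray(0);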

glVertexAttribPointer() working only with the first stream

I am trying to use glVertexAttribPointer() to give some data to my vertex shader. The thing is that it's working only with the FIRST attribute...
Here is my OpenGL code:
struct Flag_vertex
{
GLfloat position_1[ 8 ];
GLfloat position_2[ 8 ];
};
Flag_vertex flag_vertex;
... // fill some data to flag_vertex
GLuint vertexbuffer_id;
glGenBuffers( 1, &vertexbuffer_id );
glBindBuffer( GL_ARRAY_BUFFER, vertexbuffer_id );
glBufferData( GL_ARRAY_BUFFER, sizeof(flag_vertex), &flag_vertex, GL_STATIC_DRAW );
glEnableVertexAttribArray( 0 );
glEnableVertexAttribArray( 1 );
glBindBuffer( GL_ARRAY_BUFFER, vertexbuffer_id );
glVertexAttribPointer( 0, 2, GL_FLOAT, GL_FALSE, 0, (void*)offsetof(Flag_vertex, position_1) );
glVertexAttribPointer( 1, 2, GL_FLOAT, GL_FALSE, 0, (void*)offsetof(Flag_vertex, position_2) );
and my shader is something like:
#version 420 core
layout(location = 0) in vec2 in_position_1;
layout(location = 1) in vec2 in_position_2;
out vec2 texcoord;
void main()
{
gl_Position = vec4(in_position_X, 0.0, 1.0);
texcoord = in_position_X * vec2(0.5) + vec2(0.5);
}
If I use "in_position_1" my texture RENDERS PERFECTLY, but if I use in_position_2 nothing happens...
Tip: before linking my shaders I am doing:
glBindAttribLocation( programID, 0, "in_position_1");
glBindAttribLocation( programID, 1, "in_position_2");
Why does it work only with the first stream? I need more data going to my vertex shader... I need to send color, etc. Any hint?
glVertexAttribPointer( 0, 2, GL_FLOAT, GL_FALSE, 0, (void*)offsetof(Flag_vertex, position_1) );
glVertexAttribPointer( 1, 2, GL_FLOAT, GL_FALSE, 0, (void*)offsetof(Flag_vertex, position_2) );
These lines don't make sense. At least, not with how Flag_vertex is defined. If Flag_vertex is really supposed to be a vertex (and not a quad), then it makes no sense for it to have 8 floats. If each Flag_vertex defines a full quad, then you named it wrong; it's not a vertex at all, it's Flag_quad.
So it's hard to know what you're even trying to accomplish here.
Also:
If I use "in_position_1" my texture RENDERS PERFECTLY, but if I use in_position_2 nothing happens...
Of course it does. Your position data is in attribute 0. Your position data is therefore not in attribute 1. If you pretend attribute 1 has your position data when it clearly doesn't, you will not get reasonable results.
Your problem is that you're always using attribute 0 when you should be using both of them. You shouldn't be picking one or the other. Use in_position_1 for the position and in_position_2 for the texture coordinate. And try to name them reasonably, based on what they do (like position and texture_coord or something). Don't use numbers for them.
Tip: before link my shaders I am doing:
That is the exact same thing as the layout(location=#) setting in the shader. If you want it in the shader, then put it in the shader. If you want it in your OpenGL code, then put it in your OpenGL code. Don't put it in both places.
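A minimal sketch of the suggested setup on the C++ side (hypothetical interpretation: position_1 carries the position and position_2 the texture coordinate; the shader would declare layout(location = 0) in vec2 position; and layout(location = 1) in vec2 texture_coord; and use both, with the location assignments kept in the shader only):
glBindBuffer( GL_ARRAY_BUFFER, vertexbuffer_id );
glVertexAttribPointer( 0, 2, GL_FLOAT, GL_FALSE, 0, (void*)offsetof(Flag_vertex, position_1) ); // position
glVertexAttribPointer( 1, 2, GL_FLOAT, GL_FALSE, 0, (void*)offsetof(Flag_vertex, position_2) ); // texture coordinate
glEnableVertexAttribArray( 0 );
glEnableVertexAttribArray( 1 );
In the shader body, gl_Position would then come from the first attribute and texcoord from the second, so both streams are actually consumed.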

Why won't my OpenGL vertex buffer object draw anything?

I am trying to render a model in OpenGL 4 using the glDrawElements/glDrawArrays functions. I am currently reading in files with vertex data and indices for polygons. I cannot get anything to display on screen, though, except the axes I'm using for reference. I have used this general approach a bunch of times with success. I don't know what I could be doing wrong and have been stuck for almost a day.
This is where I buffer my data
// get position of "in vec4 vPosition;" from shader program
// displays axes properly
vPosition = glGetAttribLocation( program, "vPosition" );
// create buffers
glGenVertexArrays(2, vao);
glGenBuffers( 3, vbo );
// here I bind vao[0] and vbo[0] then buffer
// a set of XYZ axes which display correctly
// buffer model
glBindVertexArray( vao[1] );
glBindBuffer( GL_ARRAY_BUFFER, vbo[1] );
glBufferData( GL_ARRAY_BUFFER, sizeof(board->verts),
board->verts, GL_STATIC_DRAW );
glEnableVertexAttribArray( vPosition );
glVertexAttribPointer( vPosition, 4, GL_FLOAT, GL_FALSE, 0,
BUFFER_OFFSET(0));
glBindBuffer( GL_ELEMENT_ARRAY_BUFFER , vbo[2] );
glBufferData(
GL_ELEMENT_ARRAY_BUFFER,
sizeof(board->indices),
board->indices.data(),
GL_STATIC_DRAW);
FYI, board is a model of a surfboard whose vertices and indices for glDrawElements are read in. These vertices and indices are printed out correctly if the following code is included just above the call to glBufferData:
for( int i = 0; i < board->numVerts; i++ ){
std::cerr << i << ": "<<board->verts[i].x << " "<<board->verts[i].y<<" "<<
board->verts[i].z << " " << board->verts[i].w << std::endl;
}
the same goes for the indices if a similar print loop is put in. Here is where I attempt to draw the model:
void draw_model(){
glUniform4fv( color_loc, 1, glm::value_ptr(blue));
glBindVertexArray( vao[1] );
glDrawArrays( GL_LINE_STRIP, 0, 600 );
//glDrawElements(
// GL_LINES,
// board->indices.size(),
// GL_UNSIGNED_INT,
// NULL
//);
}
The call to glDrawElements (commented out) does not display anything. The call to glDrawArrays was a troubleshooting effort to see if I was just using glDrawElements incorrectly (I have only used glDrawArrays in the past). board is defined as model3d *board. Here is its class:
class model3d{
public:
glm::vec4 *verts;
std::vector<int> indices;
int numVerts;
int numPolys;
model3d( const char*, const char* );
private:
void load_coords( const char* );
void load_polys( const char* );
};
Why does my data not seem to buffer?
Don't use sizeof(board->verts), as it will only return the size of the pointer, not of the vertex data it points to. The same issue applies to sizeof(board->indices): that returns the size of the std::vector object itself, not of its contents.
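A sketch of the corrected buffering calls, using the counts already stored in the model3d class from the question:
glBufferData( GL_ARRAY_BUFFER, board->numVerts * sizeof(glm::vec4), board->verts, GL_STATIC_DRAW );
glBufferData( GL_ELEMENT_ARRAY_BUFFER, board->indices.size() * sizeof(int), board->indices.data(), GL_STATIC_DRAW );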