I am following Cherno's brilliant series on OpenGL, and I have encountered a problem. I have moved on from using only a vertex buffer to using a vertex buffer together with an index buffer.
What I want is for my program to draw two triangles from the given positions and indices, but when I run it I only get a black screen. My shaders work fine when drawing from just a vertex buffer, but introducing the index buffer makes it fail. Here are the relevant parts of the code:
float positions[] {
-0.5, -0.5,
0.5, -0.5,
0.5, 0.5,
-0.5, 0.5
};
unsigned int indices[] {
0, 1, 2,
2, 3, 0
};
unsigned int VBO;
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, 4*2*sizeof(float), positions, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(float)*2, 0);
unsigned int IBO;
glGenBuffers(1, &IBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, 6*sizeof(unsigned int), indices, GL_STATIC_DRAW);
ShaderProgramSource source = parseShader("res/shaders/Basic.glsl");
unsigned int shader = createShader(source.vertexSource, source.fragmentSource);
glUseProgram(shader);
/* Loop until the user closes the window */
while (!glfwWindowShouldClose(window))
{
/* Render here */
glClear(GL_COLOR_BUFFER_BIT);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);
/* Swap front and back buffers */
glfwSwapBuffers(window);
/* Poll for and process events */
glfwPollEvents();
}
I am fairly sure my code matches Cherno's, but he gets a nice-looking square on screen whereas I get nothing. Can you spot the error?
Here's some info on my system:
macOS 12.2.1
OpenGL Version 4.1
GLSL Version 3.3
Writing and compiling in Xcode
Static linking to GLEW and GLFW
Unlike on Linux or Windows, a Compatibility profile OpenGL context is not supported on macOS; you must use a Core profile context. And if you use a Core profile, you must create a Vertex Array Object yourself, because a Core profile has no default Vertex Array Object.
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(float)*2, 0);
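For reference, a minimal sketch of the GLFW/GLEW side of this on macOS; the hints must be set before glfwCreateWindow, and the exact version numbers here are only an example:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE); // required on macOS for 3.x contexts
// ... glfwCreateWindow, glfwMakeContextCurrent ...
glewExperimental = GL_TRUE; // older GLEW releases need this in a Core profile
glewInit();
With the Core profile context in place, the VAO must be created and bound before the glVertexAttribPointer call in the question, since that call stores its state in the currently bound VAO.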
I currently have an OpenGL project in which I am using GLFW for the window and context creation, and GLAD for loading OpenGL functions. The GLAD version I am using is OpenGL 4.6, compatibility profile, with all extensions (including ARB_direct_state_access).
My current graphics card settings are
OpenGL Version: 4.6.0 NVIDIA 457.09
GLSL Version: 4.60 NVIDIA
Renderer: GeForce GTX 970/PCIe/SSE2
Vendor: NVIDIA Corporation
When I run the following non-DSA code, it works fine.
// Create vertex array object and bind it
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
// Create an index buffer object and use the data in the indices vector
GLuint ibo;
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indicies.size()*sizeof(GLint), indicies.data(), GL_STATIC_DRAW);
// Create a array buffer object and use the positional data which has x,y,z components
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, positions.size()*sizeof(GLfloat), positions.data(), GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
However, when I try to translate this code to a DSA format and run it, the program opens a window and then terminates without any useful debug information.
GLuint vao;
glGenVertexArrays(1, &vao);
GLuint vbo;
glCreateBuffers(1, &vbo);
glNamedBufferStorage(vbo, positions.size()*sizeof(GLfloat), positions.data(), GL_DYNAMIC_STORAGE_BIT);
glVertexArrayVertexBuffer(vao, 0, vbo, 0, 0);
glEnableVertexArrayAttrib(vao, 0);
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(vao, 0, 0);
GLuint ibo;
glCreateBuffers(1, &ibo);
glNamedBufferStorage(ibo, sizeof(GLint)*indicies.size(), indicies.data(), GL_DYNAMIC_STORAGE_BIT);
glVertexArrayElementBuffer(vao, ibo);
In both cases I bind the Vertex Array Object before drawing like so
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, indicies.size(), GL_UNSIGNED_INT, 0);
Why is my DSA like code not working?
When you use glVertexArrayVertexBuffer you must specify the stride argument explicitly. The special case in glVertexAttribPointer, where a stride of 0 means the generic vertex attributes are understood as tightly packed, does not apply to glVertexArrayVertexBuffer. So change
glVertexArrayVertexBuffer(vao, 0, vbo, 0, 0);
to
glVertexArrayVertexBuffer(vao, 0, vbo, 0, 3*sizeof(float));
A stride of 0 won't cause an error, but it means every vertex is sourced from the same position in the buffer, which is almost certainly not what you want.
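Applied to the VAO setup in the question, the attribute configuration would then look something like this (a sketch, assuming the positions are tightly packed x, y, z floats):
glVertexArrayVertexBuffer(vao, 0, vbo, 0, 3 * sizeof(GLfloat)); // stride given explicitly
glEnableVertexArrayAttrib(vao, 0);
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0); // relative offset within one vertex
glVertexArrayAttribBinding(vao, 0, 0);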
I'm not very experienced with the OpenGL library, so I'm having trouble understanding why GL stops drawing to the screen when I move some initialization code into a class or a function. Some research indicates that the library is "global", or state-based rather than object-based?
Anyway, here is some code that works:
GLuint vertexArrayBuffer;
glGenVertexArrays(1, &vertexArrayBuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexArrayBuffer);
// VBO is ready to accept vertex data
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glBindVertexArray(0);
while(!screen.isClosed()) {
// Give the screen a background color
screen.paint(0.0f, 0.0f, 0.5f, 1.0f);
glBindBuffer(GL_ARRAY_BUFFER, vertexArrayBuffer);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableVertexAttribArray(0);
// Switch to display buffer after drawing all of the above
screen.swapBuffers();
This is all enclosed in the main function, with not much programming structure. The output is a nice white triangle on a blueish background.
Here is the issue: taking the exact code prior to the event loop and wrapping it in a function:
GLuint initVertexArray(vertex vertices[]) {
// Create file descriptor for the VBO for use as reference to gl vertex functions
GLuint vertexArrayBuffer;
glGenVertexArrays(1, &vertexArrayBuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexArrayBuffer);
// VBO is ready to accept vertex data
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glBindVertexArray(0);
return vertexArrayBuffer;
}
and calling it with GLuint vertexArrayBuffer = initVertexArray(vertices); in the main function produces no output of any kind and no errors either, just the same blueish background.
Have you checked what sizeof(vertices) is returning? In this case vertices[] decays into a pointer, so sizeof(vertices) is actually sizeof(vertex*).
Try passing the size of the array alongside it like so:
GLuint initVertexArray(vertex vertices[], const unsigned int size);
Then you would use it like so:
glBufferData(GL_ARRAY_BUFFER, size, vertices, GL_STATIC_DRAW);
instead of:
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
You would then call it in the same scope as you declared your vertices array:
vertex vertices[100];
// sizeof(vertices) here will give the actual size of the vertices array
// eg: sizeof(vertex) * 100 instead of just giving sizeof(vertex*)
GLuint vertexArrayBuffer = initVertexArray(vertices, sizeof(vertices));
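To see the decay in isolation, here is a small standalone sketch; the vertex layout is just a placeholder for whatever the real struct looks like:
#include <cstdio>
struct vertex { float x, y, z; }; // placeholder layout
void takesPointer(vertex vertices[]) {
    // The array parameter has decayed to vertex*, so this prints the pointer size (typically 8).
    std::printf("inside the function: %zu\n", sizeof(vertices));
}
int main() {
    vertex vertices[100];
    // In the declaring scope, sizeof sees the whole array: 100 * sizeof(vertex).
    std::printf("in the declaring scope: %zu\n", sizeof(vertices));
    takesPointer(vertices);
}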
I'm learning OpenGL and am currently struggling with VAOs.
I would like to draw a cube and a triangle using VAOs, but unfortunately only the object that I create later is drawn. This is what I do in the main loop:
void main()
{
//loading shader, generate window, etc ...
//generate a cube:
GLuint cube_vao = generateCube();
//next, generate a triangle:
GLuint triangle_vao = generateTriangle();
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
// Clear the screen
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
do
{
//draw:
glBindVertexArray(triangle_vao);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(cube_vao);
glDrawArrays(GL_TRIANGLES, 0, 12*3);
glfwPollEvents();
glfwSwapBuffers(window);
} while (glfwGetKey(window, GLFW_KEY_ESCAPE) != GLFW_PRESS &&
glfwWindowShouldClose(window) == 0);
}
Both generateCube() and generateTriangle() do basically the same thing: create the vertices, create the VBO, create the VAO and set the attributes. Then they return the VAO id.
This is generateTriangle() for example:
GLuint generateTriangle()
{
//generate the vertex positions:
GLfloat triangle_pos[] = //not part of the snippet -> too long
//generate vbo for the positions:
GLuint pos_vbo;
glGenBuffers(1, &pos_vbo);
glBindBuffer(GL_ARRAY_BUFFER, pos_vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangle_pos), triangle_pos, GL_STATIC_DRAW);
//next, generate the vertex colors:
GLfloat triangle_color[] = //not part of the snippet -> too long
//generate vbo for the colors:
GLuint col_vbo;
glGenBuffers(1, &col_vbo);
glBindBuffer(GL_ARRAY_BUFFER, col_vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangle_color), triangle_color, GL_STATIC_DRAW);
//generate VAO:
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
GLint pos_attrib_id = glGetAttribLocation(programID, "line_pos");
glEnableVertexAttribArray(pos_attrib_id);
glBindBuffer(GL_ARRAY_BUFFER, pos_vbo);
glVertexAttribPointer(pos_attrib_id, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
GLint col_attrib_id = glGetAttribLocation(programID, "color");
glEnableVertexAttribArray(col_attrib_id);
glBindBuffer(GL_ARRAY_BUFFER, col_vbo);
glVertexAttribPointer(col_attrib_id, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);
//function to set the perspective (argument is the model matrix)
setPerspective(glm::mat4(1.0f));
return vao;
}
With this code, only the cube gets drawn.
Furthermore, if I comment out the lines
glBindVertexArray(cube_vao); and glDrawArrays(GL_TRIANGLES, 0, 12*3); in the main, the triangle gets drawn but has the color and position of the cube, which is driving me crazy.
I'm using OS X with shader version 120, if that helps.
VAOs were introduced as standard functionality in OpenGL 3.0. On Mac OS, the default context version is 2.1. So you will need to specifically request a 3.x context during setup.
The exact mechanics of getting a 3.x context depend on the window system interface/toolkit you are using. For example, in GLUT you include the GLUT_3_2_CORE_PROFILE flag in the argument to glutInitDisplayMode(). With Cocoa, you include NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core in the pixel format attributes.
Note that Mac OS only supports the Core Profile for 3.x and later contexts. So you will not be able to use deprecated functionality anymore.
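Since the code in the question uses GLFW for the window, the equivalent there would be window hints along these lines, set before glfwCreateWindow (a sketch):
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE); // required on Mac OS for 3.x contexts
Note that the #version 120 shaders mentioned in the question will not compile in a Core profile; they would need to be ported to a core GLSL version such as #version 150.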
During every call to glDrawElements, my graphics driver crashes/freezes and recovers after a few seconds (Windows 7 Timeout Detection/Recovery). glGetError() always returns GL_NO_ERROR.
Edit: Just to be clear about what exactly happens: the first time glDrawElements is called, my computer freezes for 5-10 seconds, then the screen goes black for a few more seconds, then it recovers and Windows gives me a message: "Display driver stopped responding and has recovered". My program keeps running, but it's stuck in glDrawElements.
Update: Adding a call to glFlush() just before glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) makes it work. I don't understand why.
Here's my code, somewhat simplified and without error checking. Am I doing anything wrong?
struct Vertex
{
float pos[3];
float color[3];
};
struct Tri
{
unsigned int idxs[3];
};
// ...
GLuint m_vbo;
GLuint m_ibo;
std::vector<Vertex> m_verts;
std::vector<Tri> m_faces;
// ...
glGenBuffers(1, &m_vbo);
Vertex* vBuf = &(m_verts[0]);
unsigned int vboSize = sizeof(vBuf[0]) * m_verts.size();
glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
glBufferData(GL_ARRAY_BUFFER, vboSize, vBuf, GL_STATIC_DRAW);
glGenBuffers(1, &m_ibo);
unsigned int* iBuf = (unsigned int*) (&(m_faces[0]));
unsigned int iboSize = sizeof(iBuf[0]) * (m_faces.size() * 3);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, iboSize, iBuf, GL_STATIC_DRAW);
// ...
// this line fixes it
// glFlush();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(someShaderProgram);
// ...
glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_ibo);
// the attribute locations are queried using glGetAttribLocation in the real code
glEnableVertexAttribArray(1);
glVertexAttribPointer
(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)(0));
glEnableVertexAttribArray(0);
glVertexAttribPointer
(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)(3*sizeof(float)));
// doesn't get past this line
glDrawElements(GL_TRIANGLES, m_faces.size()*3, GL_UNSIGNED_INT, 0);
// ...
glfwSwapBuffers();
The last argument to glVertexAttribPointer is "an offset of the first component", in bytes(!). Are you sure you want the value 3 there?
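To keep the byte offsets tied to the struct layout, the attribute setup for the Vertex struct above could be written with offsetof, roughly like this (a sketch; posLoc and colorLoc stand in for the locations queried with glGetAttribLocation):
// offsetof requires <cstddef>
glEnableVertexAttribArray(posLoc);
glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, pos));
glEnableVertexAttribArray(colorLoc);
glVertexAttribPointer(colorLoc, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, color));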
My Vertex Buffer Object code is supposed to render textures nicely, but instead the textures are rendered oddly, with some triangle shapes.
What happens - http://godofgod.co.uk/my_files/wrong.png
What is supposed to happen - http://godofgod.co.uk/my_files/right.png
This function creates the VBO and sets the vertex and texture coordinate data:
extern "C" GLuint create_box_vbo(GLdouble size[2]){
GLuint vbo;
glGenBuffers(1,&vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
GLsizeiptr data_size = 8*sizeof(GLdouble);
GLdouble vertices[] = {0,0, 0,size[1], size[0],0, size[0],size[1]};
glBufferData(GL_ARRAY_BUFFER, data_size, vertices, GL_STATIC_DRAW);
data_size = 8*sizeof(GLint);
GLint textcoords[] = {0,0, 0,1, 1,0, 1,1};
glBufferData(GL_ARRAY_BUFFER, data_size, textcoords, GL_STATIC_DRAW);
return vbo;
}
Here is some relevant code from another function which is supposed to draw the textures with the VBO.
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glColor4d(1,1,1,a/255);
glBindTexture(GL_TEXTURE_2D, texture);
glTranslated(offset[0],offset[1],0);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexPointer(2, GL_DOUBLE, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glTexCoordPointer (2, GL_INT, 0, 0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDrawArrays(GL_TRIANGLES, 1, 3);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);
I would have hoped the code would use the first three coordinates (top-left, bottom-left, top-right) and the last three (bottom-left, top-right, bottom-right) to draw the triangles with the texture data correctly in the most efficient way. I don't see why triangles should make it more efficient, but apparently that's the way to go. It, of course, fails for some reason.
I am asking what is broken, but also: am I going about this the right way in general?
Thank you.
If you want to use a single VBO for both the vertex and texture coordinates, you need to group them using a struct.
Define your data:
typedef struct {
GLdouble x, y;
GLint s, t;
} VertexData;
VertexData data[] = {
// x y s t
{0.0, 0.0, 0, 0},
{0.0, size[1], 0, 1},
{size[0], 0.0, 1, 0},
{size[0], size[1], 1, 1}
};
Copy it into VBO:
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(data), (GLvoid*)data, GL_STATIC_DRAW);
Set the pointers. Note that the stride is your struct's size, and the pointer argument itself serves as the byte offset into the buffer:
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexPointer(2, GL_DOUBLE, sizeof(VertexData), (GLvoid*)offsetof(VertexData, x));
glTexCoordPointer(2, GL_INT, sizeof(VertexData), (GLvoid*)offsetof(VertexData, s));
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
And draw.
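With the pointers above, either of the original glDrawArrays calls still works; since the four vertices are already in strip order, a single call also covers the quad (a sketch):
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);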
EDIT: Implemented offset with offsetof() as Bahbar suggested.
You're loading data into the VBO twice; the second call to glBufferData replaces the first. Both gl*Pointer calls then end up using the same data when you draw.
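If you would rather not interleave, a single VBO can still hold both arrays by allocating the full size once and uploading each array at its own offset, roughly like this (a sketch using the sizes from the question):
GLsizeiptr posSize = 8 * sizeof(GLdouble);
GLsizeiptr texSize = 8 * sizeof(GLint);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// Allocate room for both arrays, then upload each one at its own offset so the
// second upload no longer overwrites the first.
glBufferData(GL_ARRAY_BUFFER, posSize + texSize, NULL, GL_STATIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, posSize, vertices);
glBufferSubData(GL_ARRAY_BUFFER, posSize, texSize, textcoords);
// The texture-coordinate pointer then starts after the positions:
glVertexPointer(2, GL_DOUBLE, 0, (GLvoid*)0);
glTexCoordPointer(2, GL_INT, 0, (GLvoid*)posSize);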