I'm learning OpenGL and currently I'm struggling with VAOs.
I would like to draw a cube and a triangle using VAOs, but unfortunately only the object that I create later is drawn. This is what I do in main:
void main()
{
    //loading shader, generate window, etc ...

    //generate a cube:
    GLuint cube_vao = generateCube();
    //next, generate a triangle:
    GLuint triangle_vao = generateTriangle();

    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);

    // Clear the screen
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    do
    {
        //draw:
        glBindVertexArray(triangle_vao);
        glDrawArrays(GL_TRIANGLES, 0, 3);

        glBindVertexArray(cube_vao);
        glDrawArrays(GL_TRIANGLES, 0, 12*3);

        glfwPollEvents();
        glfwSwapBuffers(window);
    } while (glfwGetKey(window, GLFW_KEY_ESCAPE) != GLFW_PRESS &&
             glfwWindowShouldClose(window) == 0);
}
Both generateCube() and generateTriangle() do basically the same thing: create the vertices, create the VBOs, create the VAO and set the attributes. Then they return the VAO id.
This is generateTriangle(), for example:
GLuint generateTriangle()
{
    //generate the vertex positions:
    GLfloat triangle_pos[] = //not part of the snippet -> too long

    //generate vbo for the positions:
    GLuint pos_vbo;
    glGenBuffers(1, &pos_vbo);
    glBindBuffer(GL_ARRAY_BUFFER, pos_vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(triangle_pos), triangle_pos, GL_STATIC_DRAW);

    //next, generate the vertex colors:
    GLfloat triangle_color[] = //not part of the snippet -> too long

    //generate vbo for the colors:
    GLuint col_vbo;
    glGenBuffers(1, &col_vbo);
    glBindBuffer(GL_ARRAY_BUFFER, col_vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(triangle_color), triangle_color, GL_STATIC_DRAW);

    //generate VAO:
    GLuint vao;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    GLint pos_attrib_id = glGetAttribLocation(programID, "line_pos");
    glEnableVertexAttribArray(pos_attrib_id);
    glBindBuffer(GL_ARRAY_BUFFER, pos_vbo);
    glVertexAttribPointer(pos_attrib_id, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

    GLint col_attrib_id = glGetAttribLocation(programID, "color");
    glEnableVertexAttribArray(col_attrib_id);
    glBindBuffer(GL_ARRAY_BUFFER, col_vbo);
    glVertexAttribPointer(col_attrib_id, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);

    //function to set the perspective (argument is the model matrix)
    setPerspective(glm::mat4(1.0f));

    return vao;
}
With this code, only the cube gets drawn.
Furthermore, if I comment out the lines glBindVertexArray(cube_vao); and glDrawArrays(GL_TRIANGLES, 0, 12*3); in main, the triangle gets drawn, but with the color and position of the cube. This is driving me crazy.
I'm using OSX with shader version 120, if that helps.
VAOs were introduced as standard functionality in OpenGL 3.0. On Mac OS, the default context version is 2.1. So you will need to specifically request a 3.x context during setup.
The exact mechanics of getting a 3.x context will depend on the window system interface/toolkit you are using. For example, in GLUT you include the GLUT_3_2_CORE_PROFILE flag in the argument to glutInitDisplayMode(). With Cocoa, you include NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core in the pixel format attributes.
Note that Mac OS only supports the Core Profile for 3.x and later contexts. So you will not be able to use deprecated functionality anymore.
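Since the code in the question already uses GLFW, a minimal sketch of requesting a 3.2 Core context there might look like the following (the window hints must be set before glfwCreateWindow; the window size and title are placeholders):

glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);           // required for 3.x+ contexts on Mac OS
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); // Core Profile, the only option on Mac OS
GLFWwindow* window = glfwCreateWindow(1024, 768, "VAO test", NULL, NULL);
glfwMakeContextCurrent(window);

Keep in mind that your #version 120 shaders will then also need updating, since a Core Profile context expects at least #version 150 GLSL.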
I am following Cherno's brilliant series on OpenGL, and I have encountered a problem. I have moved on from using a vertex buffer only, to now using a vertex buffer together with an index buffer.
What I want to happen is for my program to draw two triangles, using the given positions and indices. However, when I run my program I only get a black screen. My shaders work fine when drawing from a vertex buffer only, but introducing the index buffer makes it fail. Here are the relevant parts of the code:
float positions[] {
-0.5, -0.5,
0.5, -0.5,
0.5, 0.5,
-0.5, 0.5
};
unsigned int indices[] {
0, 1, 2,
2, 3, 0
};
unsigned int VBO;
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, 4*2*sizeof(float), positions, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(float)*2, 0);
unsigned int IBO;
glGenBuffers(1, &IBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, 6*sizeof(unsigned int), indices, GL_STATIC_DRAW);
ShaderProgramSource source = parseShader("res/shaders/Basic.glsl");
unsigned int shader = createShader(source.vertexSource, source.fragmentSource);
glUseProgram(shader);
/* Loop until the user closes the window */
while (!glfwWindowShouldClose(window))
{
/* Render here */
glClear(GL_COLOR_BUFFER_BIT);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);
/* Swap front and back buffers */
glfwSwapBuffers(window);
/* Poll for and process events */
glfwPollEvents();
}
I am pretty sure that my code is identical to Cherno's, but he gets a nice-looking square on screen whereas I get nothing. Can you spot an error?
Here's some info on my system:
macOS 12.2.1
OpenGL Version 4.1
GLSL Version 3.3
Writing and compiling in Xcode
Static linking to GLEW and GLFW
Unlike on Linux or Windows, a Compatibility profile OpenGL context is not supported on a Mac; you must use a Core profile context. And if you use a Core profile, you must create a Vertex Array Object yourself, because a Core profile does not have a default Vertex Array Object.
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(float)*2, 0);
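Applied to the code in the question, that means generating and binding the VAO before the buffer and attribute setup, roughly like this (a minimal sketch reusing the names from the question):

GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);                  // bind the VAO first; it records the state set below

unsigned int VBO;
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, 4*2*sizeof(float), positions, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(float)*2, 0);

unsigned int IBO;
glGenBuffers(1, &IBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBO);   // the element buffer binding is stored in the VAO as well
glBufferData(GL_ELEMENT_ARRAY_BUFFER, 6*sizeof(unsigned int), indices, GL_STATIC_DRAW);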
I currently have an OpenGL project in which I am using GLFW for the window and context creation, and GLAD for loading OpenGL functions. The GLAD version I am using is OpenGL 4.6, compatibility profile, with all extensions (including ARB_direct_state_access).
My current graphics card settings are
OpenGL Version: 4.6.0 NVIDIA 457.09
GLSL Version: 4.60 NVIDIA
Renderer: GeForce GTX 970/PCIe/SSE2
Vendor: NVIDIA Corporation
When I run the following non-DSA code, it works fine.
// Create vertex array object and bind it
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
// Create an index buffer object and use the data in the indices vector
GLuint ibo;
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indicies.size()*sizeof(GLint), indicies.data(), GL_STATIC_DRAW);
// Create an array buffer object and use the positional data, which has x,y,z components
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, positions.size()*sizeof(GLfloat), positions.data(), GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
However, when I try to translate this code to a DSA format and run it, the program opens a window and then terminates without any useful debug information.
GLuint vao;
glGenVertexArrays(1, &vao);
GLuint vbo;
glCreateBuffers(1, &vbo);
glNamedBufferStorage(vbo, positions.size()*sizeof(GLfloat), positions.data(), GL_DYNAMIC_STORAGE_BIT);
glVertexArrayVertexBuffer(vao, 0, vbo, 0, 0);
glEnableVertexArrayAttrib(vao, 0);
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(vao, 0, 0);
GLuint ibo;
glCreateBuffers(1, &ibo);
glNamedBufferStorage(ibo, sizeof(GLint)*indicies.size(), indicies.data(), GL_DYNAMIC_STORAGE_BIT);
glVertexArrayElementBuffer(vao, ibo);
In both cases I bind the Vertex Array Object before drawing like so
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, indicies.size(), GL_UNSIGNED_INT, 0);
Why is my DSA like code not working?
When you use glVertexArrayVertexBuffer, you must specify the stride argument explicitly. The special case of glVertexAttribPointer, where a stride of 0 means the generic vertex attributes are understood to be tightly packed, does not apply to glVertexArrayVertexBuffer.
Change
glVertexArrayVertexBuffer(vao, 0, vbo, 0, 0);
to
glVertexArrayVertexBuffer(vao, 0, vbo, 0, 3*sizeof(float));
A stride of 0 does not cause an error here, but it means every vertex reads the same bytes, which is almost certainly not what you want.
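For reference, a minimal sketch of the vertex buffer part of the DSA setup with the stride made explicit (assuming, as in the question, tightly packed vec3 positions; note that with the DSA entry points it is also idiomatic to create the VAO with glCreateVertexArrays instead of glGenVertexArrays, since glGen* alone does not create the underlying object):

GLuint vao;
glCreateVertexArrays(1, &vao);           // creates the VAO object immediately (DSA style)

GLuint vbo;
glCreateBuffers(1, &vbo);
glNamedBufferStorage(vbo, positions.size()*sizeof(GLfloat), positions.data(), GL_DYNAMIC_STORAGE_BIT);

// explicit stride: 3 floats per vertex
glVertexArrayVertexBuffer(vao, 0, vbo, 0, 3*sizeof(GLfloat));
glEnableVertexArrayAttrib(vao, 0);
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(vao, 0, 0);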
Hi, so I have basically been pulling my hair out trying to understand this OpenGL confusion. I have tried to find answers in books and tutorials, and even by experimenting with it.
Basically, I have an OpenGL program that draws my two triangles fine the first time, but when I try to redraw the first triangle again it doesn't seem to work. I don't know what information I am missing, but it's not making any sense to me.
As far as I understand it, once the VAO and VBO have been created, bound and buffered to memory, and the vertex attribute pointers set and enabled, I just have to bind the VAO of the object I want to draw, as many times as I like. That works for me after initialization; the problem is that once I rebind another VAO, it doesn't seem to draw it.
My code is quite long. I can paste it here if you like, but I think the drawing part of the code should be sufficient.
Here it is:
GLfloat vec[] = {0.0f, 0.0f,
1.0f, -1.0f,
-1.0f, -1.0f};
GLfloat vec2[] = {0.0f, 1.0f,
1.0f, 0.0f,
-1.0f, 0.0f};
//next step is to upload data to graphics memory
//generating a buffer from openGL
GLuint vbo;
GLuint vbo2 ;
GLuint vao;
GLuint vao2;
glGenBuffers(1, &vbo);
glGenBuffers(1, &vbo2);
glGenVertexArrays(1, &vao);
glGenVertexArrays(1, &vao2);
//to upload the actual data must make the object active by binding it to a target
glBindBuffer(GL_ARRAY_BUFFER, vbo);
//upload the data of active object to memory
glBufferData(GL_ARRAY_BUFFER, sizeof(vec), vec, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, vbo2);
glBufferData(GL_ARRAY_BUFFER, sizeof(vec2), vec2, GL_STATIC_DRAW);
//bind and draw
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0,2,GL_FLOAT, GL_FALSE, 0, NULL);
glDrawArrays (GL_TRIANGLES, 0, 3);
glXSwapBuffers ( dpy, glxWin );
sleep(3);
glClear(GL_COLOR_BUFFER_BIT);
//rendering second triangle
glBindBuffer(GL_ARRAY_BUFFER, vbo2);
glBindVertexArray(vao2);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0,2,GL_FLOAT, GL_FALSE, 0, NULL);
glDrawArrays (GL_TRIANGLES, 0, 3);
glXSwapBuffers ( dpy, glxWin );
sleep(3);
//rendering the first triangle again------where the problem lies!!!
glClear(GL_COLOR_BUFFER_BIT);
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0,2,GL_FLOAT, GL_FALSE, 0, NULL);
glDrawArrays (GL_TRIANGLES, 0, 3);
glXSwapBuffers ( dpy, glxWin );
sleep(3);
You'll also need to clear the depth buffer: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
If you followed a tutorial, then you probably enabled depth testing in the OpenGL setup boilerplate it provided, simply because more people want to use the depth buffer than not.
Alternatively, you can just not call glEnable(GL_DEPTH_TEST); during setup.
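With that change, the last block from the question would look roughly like this (a minimal sketch reusing the names from the question; dpy and glxWin come from the GLX setup that is not shown, and the VAO already stores the attribute setup, so it does not need to be repeated):

//rendering the first triangle again
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // clear colour and depth
glBindVertexArray(vao);                              // rebinding the VAO restores the attribute state
glDrawArrays(GL_TRIANGLES, 0, 3);
glXSwapBuffers(dpy, glxWin);
sleep(3);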
I'm having trouble using vertex buffer objects without using a vertex array object.
My understanding was that VAOs are just encapsulating the state around VBOs. But shouldn't the VBOs be usable without a VAO?
Here's a mini-example. With use_vao=true this works correctly (renders orange rect). With use_vao=false this renders nothing and generates a GL_INVALID_OPERATION error upon glDrawElements.
// make sure the modern opengl headers are included before any others
#include <OpenGL/gl3.h>
#define __gl_h_
#include <GLUT/glut.h>
#include <string>
#include <cassert>
// For rendering a full-viewport quad, set tex-coord from position
std::string tex_v_shader = R"(
#version 330 core
in vec3 position;
void main()
{
gl_Position = vec4(position,1.);
}
)";
// Render directly from color or depth texture
std::string tex_f_shader = R"(
#version 330 core
out vec4 color;
void main()
{
color = vec4(0.8,0.4,0.0,0.75);
}
)";
// shader id, vertex array object
GLuint tex_p_id;
int w=1440,h=480;
const GLfloat V[] = {-0.5,-0.5,0,0.5,-0.5,0,0.5,0.5,0,-0.5,0.5,0};
const GLuint F[] ={0,1,2, 0,2,3};
int main(int argc, char * argv[])
{
// Init glut and create window + OpenGL context
glutInit(&argc,argv);
glutInitDisplayMode(GLUT_3_2_CORE_PROFILE|GLUT_RGBA|GLUT_DOUBLE|GLUT_DEPTH);
glutInitWindowSize(w,h);
glutCreateWindow("test");
// Compile shaders
const auto & compile = [](const char * src,const GLenum type)->GLuint
{
GLuint s = glCreateShader(type);
glShaderSource(s, 1, &src, NULL);
glCompileShader(s);
return s;
};
tex_p_id = glCreateProgram();
glAttachShader(tex_p_id,compile(tex_v_shader.c_str(), GL_VERTEX_SHADER));
glAttachShader(tex_p_id,compile(tex_f_shader.c_str(), GL_FRAGMENT_SHADER));
glLinkProgram(tex_p_id);
glutDisplayFunc(
[]()
{
glViewport(0,0,w,h);
glUseProgram(tex_p_id);
glClearColor(0.0,0.4,0.7,0.);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
const bool use_vao = true;
GLuint VAO;
if(use_vao)
{
glGenVertexArrays(1, &VAO);
glBindVertexArray(VAO);
}
GLuint VBO, EBO;
glGenBuffers(1, &VBO);
glGenBuffers(1, &EBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(V), V, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (GLvoid*)0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(F), F, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
if(use_vao)
{
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
assert(glGetError() == GL_NO_ERROR);
glDrawElements(GL_TRIANGLES,sizeof(F)/sizeof(GLuint),GL_UNSIGNED_INT, 0);
assert(glGetError() == GL_NO_ERROR);
glutSwapBuffers();
}
);
glutReshapeFunc( [](int w,int h){::h=h, ::w=w;});
glutMainLoop();
}
On my machine glGetString(GL_VERSION) produces 4.1 INTEL-10.6.20.
Using VAOs is required in the core profile. From the OpenGL 3.3 spec, page 342, in the section E.2.2 "Removed Features":
The default vertex array object (the name zero) is also deprecated.
This means that you can't set up vertex attributes without creating and binding your own VAO.
No, with a core 3.3+ profile you need a VAO to render.
You can, however, just create and bind a single VAO once and forget about it (keep it bound).
Besides that, glEnableVertexAttribArray(0); must still be called even when using a compatibility profile without a VAO.
Another remark is that you regenerate all buffers and the VAO every frame but never clean them up. You should create them once during initialization and then only rebind when drawing:
if(!use_vao){
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (GLvoid*)0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
}
else
{
glBindVertexArray(VAO);
}
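A minimal sketch of that split, reusing the names from the question (this assumes VAO, VBO and EBO are made visible to the display callback, e.g. as globals like tex_p_id):

// one-time setup, after the context exists
GLuint VAO, VBO, EBO;                       // e.g. at global scope, like tex_p_id
glGenVertexArrays(1, &VAO);
glBindVertexArray(VAO);
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(V), V, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(0);
glGenBuffers(1, &EBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO); // recorded in the VAO
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(F), F, GL_STATIC_DRAW);

// per frame, inside the display callback
glBindVertexArray(VAO);
glDrawElements(GL_TRIANGLES, sizeof(F)/sizeof(GLuint), GL_UNSIGNED_INT, 0);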
So I wanted to implement a simple VBO to see if it was worth switching from display lists for static objects in my scene. So far, I don't think I'm switching anytime soon. Here is my problem: I can render the vertices fine, but as soon as I throw in texture coordinates, it crashes. I did a bit of experimenting, and it happens because I bind a texture. I have never heard of this being a problem, and literally the exact same code works with display lists. Here is my code:
//create a vertex buffer object for the particles to use
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
//create the data, this is really sloppy
float QuadVertextData[] = {0,0,0,1,0,0,1,1,0,0,1,0};
float QuadTextureData[] = {0,1,1,1,1,0,0,0};
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*4*3, &QuadVertextData, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
//generate another buffer for the texcoords
glGenBuffers(1, &VBOT);
glBindBuffer(GL_ARRAY_BUFFER, VBOT);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*4*2, &QuadTextureData, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
and rendering:
glUseProgram(shader);
glBindTexture(GL_TEXTURE_2D, texture);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glVertexPointer(3, GL_FLOAT, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, VBOT);
glTexCoordPointer(2, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glDrawArrays(GL_QUADS, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
I really have no idea why this is happening. Using a texture without a shader crashes it, and binding a shader without a texture crashes it. Any help or advice?
In a core OpenGL profile, you need to use VAOs in order to work with calls such as glDrawArrays and glDrawElements. A VAO (vertex array object) is a list of binding points for VBOs that encapsulates information such as the data format and the number of components for each VBO that is bound to it. In code it looks something like this:
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*4*3, &QuadVertextData, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glGenBuffers(1, &vbot);
glBindBuffer(GL_ARRAY_BUFFER, vbot);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*4*2, &QuadTextureData, GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glBindVertexArray(0);
The 0 and 1 (the first argument to glVertexAttribPointer) specify the generic vertex attribute index in your shader. The 3 and 2 (the second argument) specify the number of components per vertex. Your inputs should be defined like this in the shader in order to work with this VAO:
layout(location=0) in vec3 position;
layout(location=1) in vec2 textureCoordinates;
Before drawing anything, you just have to use the shader program, bind your texture and bind the VAO:
glUseProgram(shader);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glBindVertexArray(vao);
glDrawArrays(GL_QUADS, 0, 4);
Don't forget to set the texture unit of the sampler2D uniform in the shader.
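For example, assuming the fragment shader declares uniform sampler2D tex; (a placeholder name, not taken from the question), you would point it at texture unit 0 once after linking the program:

glUseProgram(shader);
// "tex" stands for whatever the sampler2D uniform is actually called in your fragment shader
glUniform1i(glGetUniformLocation(shader, "tex"), 0);   // 0 corresponds to GL_TEXTURE0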
If you want to learn more about switching to the OpenGL core profile, I recommend the tutorials at http://www.opengl-tutorial.org/