VBO: Array not drawn - c++

I'm following this guide and trying to draw a quad to the screen. I also checked the guide's source code; mine is the same and should work, but in my case nothing is displayed on the screen. I'm using OpenGL 2.0 with a vertex shader that simply sets the color to red, so the quad should be clearly visible.
Before calling glutMainLoop I generate the vertex buffer object:
#include <GL/glew.h> // GLEW must be included before the GL/GLUT headers
#include <GL/glut.h>
#include <vector>
using std::vector;
vector<GLfloat> quad;
GLuint buffer;
void init()
{
// This routine gets called before glutMainLoop(), I omitted all the code
// that has to do with shaders, since it's correct.
glewInit();
quad = vector<GLfloat>{-1,-1,0, 1,-1,0, 1,1,0, -1,1,0};
glGenBuffers(1,&buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER,sizeof(GLfloat)*12,quad.data(),GL_STATIC_DRAW);
}
This is my rendering routine:
void display()
{
glClearColor(0,0,0,0);
glClear(GL_COLOR_BUFFER_BIT);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER,buffer);
glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,0,0);
// I also tried passing quad.data() as the last argument, but that didn't help.
glDrawArrays(GL_QUADS,0,12);
glDisableVertexAttribArray(0);
glutSwapBuffers();
}
The problem is that nothing is drawn to the screen, I just see a black window. The quad should be red because I set the red color in the vertex shader.

The problem is the count in glDrawArrays(GL_QUADS, 0, 12): the count is the number of vertices to draw, not the number of floats, so it must be glDrawArrays(GL_QUADS, 0, 4);

It turned out I was also missing glEnableClientState:
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_QUADS,0,4);
glDisableClientState(GL_VERTEX_ARRAY);
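
Putting both fixes together, a corrected display() might look like the sketch below (assuming the question's setup; note the count is 4 vertices, not 12 floats):
void display()
{
    glClearColor(0, 0, 0, 0);
    glClear(GL_COLOR_BUFFER_BIT);
    glBindBuffer(GL_ARRAY_BUFFER, buffer);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableClientState(GL_VERTEX_ARRAY); // the missing piece reported above
    glDrawArrays(GL_QUADS, 0, 4);         // 4 vertices make one quad
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableVertexAttribArray(0);
    glutSwapBuffers();
}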

Related

Writing and reading from the same texture for an iterative DE solver on OpenGL

I am trying to write a fluid simulator that requires iteratively solving some differential equations (the Lattice-Boltzmann Method). I want it to be a real-time graphical visualisation using OpenGL, and I ran into a problem. I use a shader to perform the relevant calculations on the GPU. What I want is to pass the texture describing the state of the system at time t into the shader, have the shader perform the calculation and return the state of the system at time t+dt, render that texture on a quad, and then pass the texture back into the shader. However, I found that I cannot read from and write to the same texture at the same time. Yet I am sure I have seen implementations of such calculations on the GPU; how do they work around this? I think I saw a few discussions of ways around the fact that OpenGL cannot read and write the same texture, but I could not quite understand them or adapt them to my case. To render to texture I use: glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, renderedTexture, 0);
Here is my rendering routine:
do{
//count frames
frame_counter++;
// Render to our framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
glViewport(0,0,windowWidth,windowHeight); // Render on the whole framebuffer, complete from the lower left corner to the upper right
// Clear the screen
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Use our shader
glUseProgram(programID);
// Bind our texture in Texture Unit 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, renderTexture);
glUniform1i(TextureID, 0);
printf("Inv Width: %f", (float)1.0/windowWidth);
//Pass inverse widths (put outside of the cycle in future)
glUniform1f(invWidthID, (float)1.0/windowWidth);
glUniform1f(invHeightID, (float)1.0/windowHeight);
// 1st attribute buffer: vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, quad_vertexbuffer);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// Draw the triangles !
glDrawArrays(GL_TRIANGLES, 0, 6); // 2*3 indices starting at 0 -> 2 triangles
glDisableVertexAttribArray(0);
// Render to the screen
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Render on the whole framebuffer, complete from the lower left corner to the upper right
glViewport(0,0,windowWidth,windowHeight);
// Clear the screen
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Use our shader
glUseProgram(quad_programID);
// Bind our texture in Texture Unit 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, renderedTexture);
// Set our "renderedTexture" sampler to user Texture Unit 0
glUniform1i(texID, 0);
glUniform1f(timeID, (float)(glfwGetTime()*10.0f) );
// 1st attribute buffer: vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, quad_vertexbuffer);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// Draw the triangles !
glDrawArrays(GL_TRIANGLES, 0, 6); // 2*3 indices starting at 0 -> 2 triangles
glDisableVertexAttribArray(0);
glReadBuffer(GL_BACK);
glBindTexture(GL_TEXTURE_2D, sourceTexture);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, windowWidth, windowHeight, 0);
// Swap buffers
glfwSwapBuffers(window);
glfwPollEvents();
} while (/* main loop condition */);
What happens now is that when I render to the framebuffer, the texture I get as input is empty, I think. But when I render the same texture on screen, it successfully renders what I expect.
Okay, I think I've managed to figure something out. Instead of rendering to a framebuffer, what I can do is use glCopyTexImage2D to copy whatever got rendered on screen into a texture. Now, however, I have another issue: I can't tell whether glCopyTexImage2D will work with a framebuffer. It works with on-screen rendering, but I am failing to get it to work when rendering to a framebuffer, and I am not sure it is even possible in the first place. I made a separate question on this:
Does glCopyTexImage2D work when rendering offscreen?
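
The usual workaround for the read/write restriction is texture ping-ponging: keep two textures, each attached to its own FBO, sample from one while writing to the other, and swap them every iteration. A minimal sketch of that idea (stateTex, stateFbo, simulationProgram and drawFullscreenQuad are illustrative names, not from the code above):
GLuint stateTex[2], stateFbo[2];
// ...create both textures and attach stateTex[i] to stateFbo[i] with
// glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, stateTex[i], 0);
int src = 0, dst = 1;
for (int step = 0; step < steps; ++step) {
    glBindFramebuffer(GL_FRAMEBUFFER, stateFbo[dst]); // write to one texture...
    glUseProgram(simulationProgram);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, stateTex[src]);      // ...while sampling the other
    drawFullscreenQuad(); // hypothetical helper that draws the quad
    std::swap(src, dst);  // the new state becomes the next step's input
}
// finally, render stateTex[src] to the default framebuffer to display it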

Multiple images of same mesh without duplicate triangle transfers

I take multiple images of the same mesh using OpenGL, GLEW and GLFW. The mesh (triangles) doesn't change in each shot, only the ModelViewMatrix does.
Here's the important code of my main loop:
for (int i = 0; i < number_of_images; i++) {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
/* set GL_MODELVIEW matrix depending on i */
glBegin(GL_TRIANGLES);
for (Triangle &t : mesh) {
for (Point &p : t) {
glVertex3f(p.x, p.y, p.z);
}
}
glEnd();
glReadPixels(/*...*/); // get picture and store it somewhere
glfwSwapBuffers();
}
As you can see, I set/transfer the triangle vertices for each shot I want to take. Is there a solution in which I only need to transfer them once? My mesh is quite large, so this transfer takes quite some time.
In the year 2016 you must not use glBegin/glEnd. No way. Use Vertex Array Objects instead, and use custom vertex and/or geometry shaders to reposition and modify your vertex data. Using these techniques you will upload your data to the GPU once, and then you'll be able to draw the same mesh with various transformations.
Here is an outline of how your code may look:
// 1. Initialization.
// Object handles:
GLuint vao;
GLuint verticesVbo;
// Generate and bind vertex array object.
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
// Generate a buffer object.
glGenBuffers(1, &verticesVbo);
// Enable vertex attribute number 0, which
// corresponds to vertex coordinates in older OpenGL versions.
const GLuint ATTRIBINDEX_VERTEX = 0;
glEnableVertexAttribArray(ATTRIBINDEX_VERTEX);
// Bind buffer object.
glBindBuffer(GL_ARRAY_BUFFER, verticesVbo);
// Mesh geometry. In your actual code you probably will generate
// or load these data instead of hard-coding.
// This is an example of a single triangle.
GLfloat vertices[] = {
0.0f, 0.0f, -9.0f,
0.0f, 0.1f, -9.0f,
1.0f, 1.0f, -9.0f
};
// Determine vertex data format.
glVertexAttribPointer(ATTRIBINDEX_VERTEX, 3, GL_FLOAT, GL_FALSE, 0, 0);
// Pass actual data to the GPU.
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*3*3, vertices, GL_STATIC_DRAW);
// Initialization complete - unbinding objects.
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
// 2. Draw calls.
while(/* draw calls are needed */) {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindVertexArray(vao);
// Set transformation matrix and/or other
// transformation parameters here using glUniform* calls.
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0); // Unbinding just as an example in case if some other code will bind something else later.
}
And a vertex shader may look like this:
#version 330 core // explicit attribute locations need GLSL 3.30 or the ARB extension
layout(location=0) in vec3 vertex_pos;
uniform mat4 viewProjectionMatrix; // Assuming you set this before glDrawArrays.
void main(void) {
gl_Position = viewProjectionMatrix * vec4(vertex_pos, 1.0f);
}
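Each shot would then update the matrix uniform before its draw call, something like this (program is assumed to be the linked shader program, and viewProjectionForShot is a hypothetical helper returning a column-major float[16] for shot i):
GLint vpLoc = glGetUniformLocation(program, "viewProjectionMatrix");
glUseProgram(program);
for (int i = 0; i < number_of_images; i++) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBindVertexArray(vao);
    glUniformMatrix4fv(vpLoc, 1, GL_FALSE, viewProjectionForShot(i));
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glReadPixels(/*...*/); // get picture and store it somewhere, as before
    glfwSwapBuffers();
}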
Also take a look at this page for a good modern accelerated graphics book.
@BDL already commented that you should abandon the immediate mode drawing calls (glBegin … glEnd) and switch to vertex array drawing (glDrawElements, glDrawArrays) that fetches its data from Vertex Buffer Objects (VBOs). @Sergey mentioned Vertex Array Objects in his answer, but those are actually state containers for VBOs.
A very important thing you have to understand (and judging from how you asked your question, it's something you're not yet aware of) is that OpenGL does not deal with "meshes", "scenes" or the like. OpenGL is just a drawing API. It draws points… lines… and triangles… one at a time… with no connection between them whatsoever. That's it. So when you show multiple views of the "same" thing, you must draw it several times. There's no way around this.
The most recent versions of OpenGL support multiple-viewport rendering, but it still takes a geometry shader to duplicate the geometry into the several views to be drawn.
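For completeness, here is a rough sketch of what such a geometry shader could look like on OpenGL 4.1-class hardware, running one shader invocation per view (the uniform array name is illustrative):
#version 410 core
layout(triangles, invocations = 4) in;        // run the shader once per view
layout(triangle_strip, max_vertices = 3) out;
uniform mat4 viewProjection[4];               // one matrix per viewport
void main(void) {
    for (int i = 0; i < 3; ++i) {
        gl_ViewportIndex = gl_InvocationID;   // route the triangle to viewport N
        gl_Position = viewProjection[gl_InvocationID] * gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}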

Can't draw things with EBOs

In a C++ application I am writing, I am trying to draw a quad using an EBO (element buffer object). Whenever I try, I can't get the quad to draw at all. What am I doing wrong?
code:
//vertices and indices
GLfloat vertices[]={
//position texture coordinate
-0.005f,0.02f,0.0f, 0.0f,1.0f,
0.02f,0.02f,0.0f, 1.0f,1.0f,
0.02f,-0.02f,0.0f, 1.0f,0.0f,
-0.005f,-0.02f,0.0f, 0.0f,0.0f,
};
GLfloat indices[]={
0,1,3,
2,3,1
};
//initialization
glCreateVertexArrays(1,&VAO);
glBindVertexArray(VAO);
glCreateBuffers(1,&VBO);
glCreateBuffers(1,&EBO);
glBindBuffer(GL_ARRAY_BUFFER,VBO);
glBufferData(GL_ARRAY_BUFFER,sizeof(vertices),vertices,GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,EBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,sizeof(indices),indices,GL_STATIC_DRAW);
glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,5*sizeof(GLfloat),(GLvoid*)nullptr);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1,2,GL_FLOAT,GL_FALSE,5*sizeof(GLfloat),(GLvoid*)(3*sizeof(GLfloat)));
glEnableVertexAttribArray(1);
glBindVertexArray(0);
//drawing commands
transformLocation=glGetUniformLocation(textureProgram,"transform");
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D,woodTexture);
glUseProgram(textureProgram);
glUniformMatrix4fv(transformLocation,1,GL_FALSE,glm::value_ptr(transform));
glBindVertexArray(bowHandleVAO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,bowHandleEBO);
glDrawElements(GL_TRIANGLES,6,GL_UNSIGNED_INT,nullptr);
This works with the glDrawArrays equivalent, but whenever I try to use EBOs nothing is drawn. Comment if you need more information.
The most immediate error that I can see is a type mismatch between your indices definition and its usage in the glDrawElements call.
Suggestion: change GLfloat to GLuint, i.e., define your indices as:
GLuint indices[]={ //...
In addition to what Amadeus said about changing your indices array from GLfloat to GLuint, you seem to be using the wrong VAO and EBO. In the code you show us you buffer all your vertex data into a buffer object bound while VAO is recorded and your indices into EBO, but when you draw you bind bowHandleVAO and bowHandleEBO instead.
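
Combining both fixes, the relevant parts might look like this (assuming the VAO and EBO from the initialization code are the objects you actually mean to draw with):
GLuint indices[]={ // integer type must match GL_UNSIGNED_INT in glDrawElements
    0,1,3,
    2,3,1
};
// ...initialization as before...
glBindVertexArray(VAO); // the EBO binding is stored in the VAO, so this is enough
glDrawElements(GL_TRIANGLES,6,GL_UNSIGNED_INT,nullptr);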

OpenGL only renders the first frame then blackness

I have created a deferred renderer using OpenGL that seems to work great for exactly one frame. After that it renders just blackness. For the code below I have flattened the renderer's architecture quite a lot, but I think everything relevant is included. If more context is needed, you can look here.
This first piece is run at program initialization:
// corresponds to deferredRenderer.Bind();
glViewport(0, 0, display.GetWidth(), display.GetHeight());
glClearColor(0, 0, 0, 1);
glFrontFace(GL_CW);
glCullFace(GL_BACK);
glDepthFunc(GL_LEQUAL);
Then the loop begins. First the renderer is bound for object/material pass:
// corresponds to deferredRenderer.BindForObjectPass();
gBuffer.BindAsDrawFrameBuffer();
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
This code is then run for every object (each has its own shader):
materialShader.Bind();
diffuseTexture->Bind(0);
shader.SetUniform("u_diffuse", 0);
glm::mat4 modelViewMatrix = camera.GetViewMatrix() * transform.GetModelMatrix();
glm::mat4 projectionMatrix = camera.GetProjectionMatrix();
materialShader.SetUniform("u_model_view_matrix", modelViewMatrix);
materialShader.SetUniform("u_projection_matrix", projectionMatrix);
glBindVertexArray(vertexArray);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
After all objects have been rendered, the light pass begins. At this stage in development it's just one shader with a hardcoded light:
// corresponds to deferredRenderer.RenderLightPass();
display.BindAsDrawFrameBuffer();
glDisable(GL_CULL_FACE);
glDisable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT);
screenSpaceShader.Bind();
gBuffer.GetAlbedoTexture().Bind(10);
screenSpaceShader.SetUniform("u_albedo", 10);
gBuffer.GetNormalTexture().Bind(11);
screenSpaceShader.SetUniform("u_normals", 11);
glBindVertexArray(vertexArray);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 8);
glBindVertexArray(0);
And finally the backbuffer is switched to the front:
SDL_GL_SwapWindow(window);
After this the renderer is bound for object pass again and continues to loop.
Note that the first frame renders exactly as it should, so I think it's safe to assume the code is at least somewhat correct. The fact that it changes after one full frame also tells me that it probably has something to do with the GL state being strange after the first loop. I have also made sure that the gBuffer renderbuffer is complete, so that shouldn't be the problem.
I fixed it by using debugging tools that told me which GL calls caused errors.
The issue was that a shader was accidentally destroyed as soon as it was constructed, because I was initializing it on the stack in an initializer list. All of the code in the question should be working correctly, though.
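
For illustration, the kind of lifetime bug described above could look like this (Shader here is a hypothetical RAII wrapper whose destructor calls glDeleteProgram; the names are not from the actual code):
class DeferredRenderer {
public:
    // Bug: the Shader temporary is destroyed at the end of the constructor,
    // deleting the GL program; the reference member is left dangling.
    DeferredRenderer()
        : screenSpaceShader(Shader("screen.vert", "screen.frag")) {}
private:
    const Shader& screenSpaceShader; // should own it by value: Shader screenSpaceShader;
};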

How not to overwrite vertex colors using shaders in OpenGL?

For the past three hours I have been trying to figure out how to draw two different triangles with different colours using shaders in OpenGL, and I still cannot work it out. Here is my code:
void setShaders(void)
{
vshader = loadShader("test.vert", GL_VERTEX_SHADER_ARB);
fshader = loadShader("test.frag", GL_FRAGMENT_SHADER_ARB);
vshader2 = loadShader("test2.vert", GL_VERTEX_SHADER_ARB);
fshader2 = loadShader("test2.frag", GL_FRAGMENT_SHADER_ARB);
shaderProg = glCreateProgramObjectARB();
glAttachObjectARB(shaderProg, vshader);
glAttachObjectARB(shaderProg, fshader);
glLinkProgramARB(shaderProg);
shaderProg2 = glCreateProgramObjectARB();
glAttachObjectARB(shaderProg2, vshader2);
glAttachObjectARB(shaderProg2, fshader2);
glLinkProgramARB(shaderProg2);
}
void makeBuffers(void)
{
// smaller orange triangle
glGenBuffers (1, &vbo);
glBindBuffer (GL_ARRAY_BUFFER, vbo);
glBufferData (GL_ARRAY_BUFFER, sizeof(points), points, GL_STATIC_DRAW);
glGenVertexArrays (1, &vao);
glBindVertexArray (vao);
glEnableVertexAttribArray (0);
glBindBuffer (GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer (0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
// larger purple triangle
glGenBuffers (1, &vbo2);
glBindBuffer (GL_ARRAY_BUFFER, vbo2);
glBufferData (GL_ARRAY_BUFFER, sizeof(points2), points2, GL_STATIC_DRAW);
glGenVertexArrays (1, &vao2);
glBindVertexArray (vao2);
glEnableVertexAttribArray (0);
glBindBuffer (GL_ARRAY_BUFFER, vbo2);
glVertexAttribPointer (0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
}
void window::displayCallback(void)
{
Matrix4 m4; // MT = UT * SpinMatrix
m4 = cube.getMatrix(); // make copy of the cube main matrix
cube.get_spin().mult(m4); // mult
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear color and depth buffers
glMatrixMode(GL_MODELVIEW);
glLoadMatrixd(cube.get_spin().getPointer()); // pass the pointer to new MT matrix
// draw smaller orange triangle
glUseProgramObjectARB(shaderProg);
glBindVertexArray(vao);
glDrawArrays (GL_TRIANGLES, 0, 3);
glDeleteObjectARB(shaderProg);
// draw the larger purple triangle
glUseProgramObjectARB(shaderProg2);
glBindVertexArray(vao2);
glDrawArrays (GL_TRIANGLES, 0, 3);
glDeleteObjectARB(shaderProg2);
glFlush();
glutSwapBuffers();
}
shaders:
test.vert and test2.vert are the same and are:
#version 120
//varying vec3 vp;
void main()
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
test.frag:
#version 120
void main()
{
gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);
}
test2.frag:
#version 120
void main()
{
gl_FragColor = vec4(0.5, 0.0, 0.5, 1.0);
}
But what I get is two triangles that are both coloured purple. What am I doing wrong that causes my smaller orange triangle to be redrawn in purple?
You are deleting the shader programs after you use them in the displayCallback() method:
...
glDrawArrays (GL_TRIANGLES, 0, 3);
glDeleteObjectARB(shaderProg);
...
glDrawArrays (GL_TRIANGLES, 0, 3);
glDeleteObjectARB(shaderProg2);
If displayCallback() is called more than once, which you certainly need to expect since a window will often have to be redrawn multiple times, the shaders will be gone after the first time. In fact, the second program will not be deleted immediately because it is the currently active program, which explains why it continues to be used for both triangles.
Shader programs are only deleted after glDelete*() is called on them and they are no longer referenced as the active program. So your glDelete*() call for shaderProg takes effect once you make shaderProg2 active: shaderProg is then no longer active, which releases its last reference.
You should not delete the shader programs until shutdown, or until you don't plan to use them anymore for rendering, e.g. because you're creating new programs. So in your case, you can delete them when the application exits. At least that's often considered good style, even though it's not technically necessary; OpenGL resources are cleaned up automatically when an application exits, similar to regular memory allocations.
BTW, if you are using at least OpenGL 2.0, all the calls for using shaders and programs are core functionality. There's no need to use the ARB version calls.
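
A corrected drawing routine could therefore look like this sketch, using the core-profile calls and deferring deletion to a hypothetical shutdown hook:
void window::displayCallback(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // matrix setup as before...
    glUseProgram(shaderProg);  // core equivalent of glUseProgramObjectARB
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glUseProgram(shaderProg2);
    glBindVertexArray(vao2);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glutSwapBuffers();
}
void cleanup(void) // called once at application exit
{
    glDeleteProgram(shaderProg);
    glDeleteProgram(shaderProg2);
}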