I have created a deferred renderer using OpenGL that seems to work great for exactly one frame; after that it renders nothing but blackness. For the code below I have flattened the renderer's architecture quite a lot, but I think everything relevant is included. If more context is needed you can look here.
This first piece is run at program initialization:
// corresponds to deferredRenderer.Bind();
glViewport(0, 0, display.GetWidth(), display.GetHeight());
glClearColor(0, 0, 0, 1);
glFrontFace(GL_CW);
glCullFace(GL_BACK);
glDepthFunc(GL_LEQUAL);
Then the loop begins. First the renderer is bound for the object/material pass:
// corresponds to deferredRenderer.BindForObjectPass();
gBuffer.BindAsDrawFrameBuffer();
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
This code is then run for every object (each has its own shader):
materialShader.Bind();
diffuseTexture->Bind(0);
materialShader.SetUniform("u_diffuse", 0);
glm::mat4 modelViewMatrix = camera.GetViewMatrix() * transform.GetModelMatrix();
glm::mat4 projectionMatrix = camera.GetProjectionMatrix();
materialShader.SetUniform("u_model_view_matrix", modelViewMatrix);
materialShader.SetUniform("u_projection_matrix", projectionMatrix);
glBindVertexArray(vertexArray);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
After all objects have been rendered, the light pass begins. At this stage of development it's just one shader with a hardcoded light:
// corresponds to deferredRenderer.RenderLightPass();
display.BindAsDrawFrameBuffer();
glDisable(GL_CULL_FACE);
glDisable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT);
screenSpaceShader.Bind();
gBuffer.GetAlbedoTexture().Bind(10);
screenSpaceShader.SetUniform("u_albedo", 10);
gBuffer.GetNormalTexture().Bind(11);
screenSpaceShader.SetUniform("u_normals", 11);
glBindVertexArray(vertexArray);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 8);
glBindVertexArray(0);
And finally the backbuffer is switched to the front:
SDL_GL_SwapWindow(window);
After this the renderer is bound for object pass again and continues to loop.
Note that the first frame renders exactly as it should, so I think it's safe to assume the setup is at least somewhat correct. The fact that the output changes after one full frame also tells me that it probably has something to do with the GL state ending up in a strange configuration after the first loop. I have also made sure that the gBuffer framebuffer is complete, so that shouldn't be the problem.
I fixed it by using some debugging tools that told me which GL calls caused errors.
The issue was that a shader was accidentally destroyed as soon as it was constructed, since I was initializing it on the stack in an initializer list. All of the code in the question should be working correctly, though.
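To illustrate the kind of bug described (all names here are hypothetical, not the actual code), here is a minimal C++ sketch: the Shader is created as a stack temporary while initializing a member, its RAII destructor runs as soon as the statement finishes, and the stored GL program name dangles from then on.
class Shader {
public:
    Shader(const char* vsPath, const char* fsPath); // glCreateProgram + compile + link
    ~Shader(); // RAII: calls glDeleteProgram
    GLuint GetId() const { return programId; }
private:
    GLuint programId;
};
class DeferredRenderer {
public:
    // BUG: Shader(...) is a temporary that dies at the end of this
    // full-expression, deleting the GL program; the stored id dangles.
    DeferredRenderer()
        : screenSpaceShaderId(Shader("screen.vert", "screen.frag").GetId()) {}
private:
    GLuint screenSpaceShaderId;
};
The fix is to store the Shader object itself as a member (or otherwise give it a lifetime matching the renderer's), so its destructor only runs when the renderer goes away.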
I have two planar shadows of the same object coming from the same light source: one cast on the floor and one cast on the wall when the object is close enough. The shadows themselves are cast just fine; I'm using the stencil buffer to make sure that the two shadows are only drawn on their respective surfaces without being rendered outside of the room.
The problem is that the two stencil buffers bleed into each other: specifically, whichever shadow I render second bleeds into the stencil area of the first one. I figure I'm using the wrong parameters in the stencil function or stencil operation, but I can't seem to figure it out.
// Generate the shadow using a shadow matrix (created using light position and vertices of
// the quad on which the shadow will be projected) and the object I'm making a shadow of
void createShadow(float shadowMat[16])
{
glDisable(GL_DEPTH_TEST);
glDisable(GL_LIGHTING);
glDisable(GL_TEXTURE_2D);
// Set the shadow color
glColor3f(0.1, 0.1, 0.1);
glPushMatrix();
// Create the shadow using the matrix and the object casting a shadow
glMultMatrixf((GLfloat*)shadowMat);
// ... translate, rotate, etc. ...
// ... render the object ...
glPopMatrix();
// Reset values to render the rest of the scene
glColor3f(1.0, 1.0, 1.0);
glEnable(GL_DEPTH_TEST);
glEnable(GL_LIGHTING);
glEnable(GL_TEXTURE_2D);
}
// Set up the stencil buffer and render the shadow to it
void renderShadow(float shadowMat[16], float shadowQuad[12])
{
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glDisable(GL_DEPTH_TEST);
// Create a stencil for the shadow, using the vertices of the plane on which it will
// be projected
glPushMatrix();
// ... translate, rotate, etc. ...
glEnableClientState(GL_VERTEX_ARRAY);
// The shadow quad is the same vertices that I use to make the shadow matrix
glVertexPointer(3, GL_FLOAT, 0, shadowQuad);
glDrawArrays(GL_QUADS, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
glPopMatrix();
glEnable(GL_DEPTH_TEST);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
// Render the shadow to the plane
createShadow(shadowMat);
glDisable(GL_STENCIL_TEST);
}
// In the render function:
// Render floor/surrounding area
// Set up light using the same position used to make the shadow matrix
renderShadow(wallShadowMatrix, wallVertices);
renderShadow(floorShadowMatrix, floorVertices);
// Render rest of scene
If I render the shadows on their own they work as intended, but when I render them together, whichever one is rendered second shows up in the stencil of the first shadow.
I've included a few pictures; the first two show the individual Shadow on the wall and Shadow on the floor, and here is the floor shadow rendered after the wall shadow, and vice versa.
Fixed it: I needed to add the following line between the two renderShadow calls in the render function:
glClear(GL_STENCIL_BUFFER_BIT);
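In context, the render function becomes (the extra clear wipes the first shadow's stencil so the second pass starts from a clean buffer):
// Render floor/surrounding area
// Set up light using the same position used to make the shadow matrix
renderShadow(wallShadowMatrix, wallVertices);
glClear(GL_STENCIL_BUFFER_BIT); // reset the stencil left by the wall shadow
renderShadow(floorShadowMatrix, floorVertices);
// Render rest of scene
An alternative that avoids the extra clear would be to write a different reference value per shadow (e.g. glStencilFunc(GL_ALWAYS, 1, 0xFF) for the wall quad and GL_ALWAYS, 2 for the floor quad) and then test with GL_EQUAL against the matching value, but clearing between the two passes is the simplest fix.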
I am trying to write a fluid simulator that requires iteratively solving some differential equations (Lattice-Boltzmann Method). I want it to be a real-time graphical visualisation using OpenGL. I ran into a problem: I use a shader to perform the relevant calculations on the GPU. What I want is to pass the texture describing the state of the system at time t into the shader, have the shader perform the calculation and return the state of the system at time t+dt, render that texture on a quad, and then pass the texture back into the shader. However, I found that I cannot read and write the same texture at the same time. But I am sure I have seen implementations of such calculations on the GPU. How do they work around it?
I think I saw a few discussions of a different way of working around the fact that OpenGL cannot read and write the same texture, but I could not quite understand them or adapt them to my case. To render to texture I use:
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, renderedTexture, 0);
Here is my rendering routine:
do{
//count frames
frame_counter++;
// Render to our framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
glViewport(0,0,windowWidth,windowHeight); // Render on the whole framebuffer, complete from the lower left corner to the upper right
// Clear the screen
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Use our shader
glUseProgram(programID);
// Bind our texture in Texture Unit 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, renderTexture);
glUniform1i(TextureID, 0);
printf("Inv Width: %f\n", (float)1.0/windowWidth);
//Pass inverse widths (put outside of the cycle in future)
glUniform1f(invWidthID, (float)1.0/windowWidth);
glUniform1f(invHeightID, (float)1.0/windowHeight);
// 1rst attribute buffer : vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, quad_vertexbuffer);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// Draw the triangles !
glDrawArrays(GL_TRIANGLES, 0, 6); // 2*3 indices starting at 0 -> 2 triangles
glDisableVertexAttribArray(0);
// Render to the screen
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Render on the whole framebuffer, complete from the lower left corner to the upper right
glViewport(0,0,windowWidth,windowHeight);
// Clear the screen
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Use our shader
glUseProgram(quad_programID);
// Bind our texture in Texture Unit 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, renderedTexture);
// Set our "renderedTexture" sampler to user Texture Unit 0
glUniform1i(texID, 0);
glUniform1f(timeID, (float)(glfwGetTime()*10.0f) );
// 1rst attribute buffer : vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, quad_vertexbuffer);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// Draw the triangles !
glDrawArrays(GL_TRIANGLES, 0, 6); // 2*3 indices starting at 0 -> 2 triangles
glDisableVertexAttribArray(0);
glReadBuffer(GL_BACK);
glBindTexture(GL_TEXTURE_2D, sourceTexture);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, windowWidth, windowHeight, 0);
// Swap buffers
glfwSwapBuffers(window);
glfwPollEvents();
} while (glfwWindowShouldClose(window) == 0); // loop condition was not shown in the original; this one is assumed
What happens now is that when I render to the framebuffer, the texture I get as input is empty, I think. But when I render the same texture on screen, it successfully renders what I expect.
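(For context, since the question asks how other implementations work around this: the usual approach is to "ping-pong" between two textures, reading from one while writing into the other through its FBO, then swapping roles each step, so nothing is ever read and written at the same time. A minimal sketch, assuming two textures each attached to its own framebuffer the same way renderedTexture is above:
GLuint tex[2], fbo[2]; // tex[i] is attached to fbo[i] as GL_COLOR_ATTACHMENT0
int src = 0, dst = 1;
// One simulation step:
glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]); // write state at t+dt into tex[dst]
glUseProgram(programID);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex[src]);      // read state at t from tex[src]
glUniform1i(TextureID, 0);
// ... draw the fullscreen quad as in the loop above ...
// Swap roles for the next step; no copying needed.
int tmp = src; src = dst; dst = tmp;
)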
Okay, I think I've managed to figure something out. Instead of rendering to a framebuffer, I can use glCopyTexImage2D to copy whatever got rendered on the screen into a texture. Now, however, I have another issue: I can't work out whether glCopyTexImage2D will work with a framebuffer. It works with on-screen rendering, but I am failing to get it to work when rendering to a framebuffer. I'm not sure this is even possible in the first place. I made a separate question on this:
Does glCopyTexImage2D work when rendering offscreen?
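For what it's worth, glCopyTexImage2D copies from the currently selected read buffer of the currently bound read framebuffer, so it can work offscreen too; the FBO just has to be bound for reading with the right attachment selected. A hedged sketch:
glBindFramebuffer(GL_READ_FRAMEBUFFER, FramebufferName);
glReadBuffer(GL_COLOR_ATTACHMENT0); // read from the FBO's color attachment
glBindTexture(GL_TEXTURE_2D, sourceTexture);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, windowWidth, windowHeight, 0);
That said, ping-ponging between two textures as sketched earlier avoids the copy entirely.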
I take multiple images of the same mesh using OpenGL, GLEW and GLFW. The mesh (triangles) doesn't change in each shot, only the ModelViewMatrix does.
Here's the important code of my mainloop:
for (int i = 0; i < number_of_images; i++) {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
/* set GL_MODELVIEW matrix depending on i */
glBegin(GL_TRIANGLES);
for (Triangle &t : mesh) {
for (Point &p : t) {
glVertex3f(p.x, p.y, p.z);
}
}
glEnd();
glReadPixels(/*...*/); // get picture and store it somewhere
glfwSwapBuffers();
}
As you can see, I set/transfer the triangle vertices for each shot I want to take. Is there a solution in which I only need to transfer them once? My mesh is quite large, so this transfer takes quite some time.
In the year 2016 you must not use glBegin/glEnd. No way. Use Vertex Array Objects instead, and use custom vertex and/or geometry shaders to reposition and modify your vertex data. Using these techniques, you will upload your data to the GPU once, and then you'll be able to draw the same mesh with various transformations.
Here is an outline of what your code may look like:
// 1. Initialization.
// Object handles:
GLuint vao;
GLuint verticesVbo;
// Generate and bind vertex array object.
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
// Generate a buffer object.
glGenBuffers(1, &verticesVbo);
// Enable vertex attribute number 0, which
// corresponds to vertex coordinates in older OpenGL versions.
const GLuint ATTRIBINDEX_VERTEX = 0;
glEnableVertexAttribArray(ATTRIBINDEX_VERTEX);
// Bind buffer object.
glBindBuffer(GL_ARRAY_BUFFER, verticesVbo);
// Mesh geometry. In your actual code you probably will generate
// or load these data instead of hard-coding.
// This is an example of a single triangle.
GLfloat vertices[] = {
0.0f, 0.0f, -9.0f,
0.0f, 0.1f, -9.0f,
1.0f, 1.0f, -9.0f
};
// Determine vertex data format.
glVertexAttribPointer(ATTRIBINDEX_VERTEX, 3, GL_FLOAT, GL_FALSE, 0, 0);
// Pass actual data to the GPU.
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*3*3, vertices, GL_STATIC_DRAW);
// Initialization complete - unbinding objects.
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
// 2. Draw calls.
while(/* draw calls are needed */) {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindVertexArray(vao);
// Set transformation matrix and/or other
// transformation parameters here using glUniform* calls.
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0); // Unbinding just as an example in case if some other code will bind something else later.
}
And a vertex shader may look like this:
#version 330 core
layout(location=0) in vec3 vertex_pos;
uniform mat4 viewProjectionMatrix; // Assuming you set this before glDrawArrays.
void main(void) {
gl_Position = viewProjectionMatrix * vec4(vertex_pos, 1.0f);
}
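To connect this with the draw loop above, the "glUniform* calls" comment would expand to something like the following sketch (shaderProgram and matrixData are assumed names; the 16 floats are column-major, e.g. from glm::value_ptr):
GLint mvpLocation = glGetUniformLocation(shaderProgram, "viewProjectionMatrix");
glUseProgram(shaderProgram);
glBindVertexArray(vao);
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, matrixData); // new matrix per shot
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);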
Also take a look at this page for a good modern accelerated graphics book.
@BDL already commented that you should abandon the immediate-mode drawing calls (glBegin … glEnd) and switch to vertex array drawing (glDrawElements, glDrawArrays) that fetches its data from Vertex Buffer Objects (VBOs). @Sergey mentioned Vertex Array Objects in his answer, but those are actually state containers for VBOs.
A very important thing you have to understand – and the way you asked your question suggests you're not yet aware of this – is that OpenGL does not deal with "meshes", "scenes" or the like. OpenGL is just a drawing API. It draws points… lines… and triangles… one at a time… with no connection between them whatsoever. That's it. So when you show multiple views of the "same" thing, you must draw it several times. There's no way around this.
Most recent versions of OpenGL support multiple viewport rendering, but it still takes a geometry shader to multiply the geometry into several pieces to be drawn.
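As a rough illustration of that last point (a sketch, assuming GL 4.1 or ARB_viewport_array for gl_ViewportIndex), a geometry shader that duplicates each incoming triangle into two viewports might look like this:
#version 410 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 6) out;
void main(void) {
    // Emit the incoming triangle once per viewport.
    for (int vp = 0; vp < 2; ++vp) {
        gl_ViewportIndex = vp;
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}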
I have an OpenGL 1.1 ES 2D sprite engine that's based on one GL_TRIANGLE_FAN per sprite. The main rendering code that gets called per-sprite, per-frame is as follows:
void drawTexture(BitmapImage* aImage, short* vertices, float* texCoords,
ColorMap &colorMap, TInt xDest, TInt yDest, TInt aAlpha)
{
glPushMatrix();
glLoadIdentity();
glBindTexture(GL_TEXTURE_2D, textureId);
glVertexPointer(3, GL_SHORT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glColorPointer(RGBA_BYTES, GL_UNSIGNED_BYTE, 0, colorMap.GetMap());
TFloat scaleX, scaleY;
aImage->getScale(scaleX, scaleY);
glTranslatef((float)xDest, (float)yDest, 0.0f);
glScalef(scaleX, scaleY, 1.0f);
glRotatef(aImage->getRotAngle(), 0.0f, 0.0f, 1.0f);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
glPopMatrix();
}
I've been told that switching to Vertex Buffer Objects (VBOs) will significantly increase the performance of rendering, so I'd like to do that. My research thus far has lead me to several examples showing how to set up individual vertex, color, and texture offset buffers, but good examples of how to interleave this data have been more elusive.
For example, I'm pretty sure this is how I'd set up to render with my vertex data in a VBO:
glGenBuffers(1, &batchBufferHandle);
glBindBuffer(GL_ARRAY_BUFFER, batchBufferHandle);
glBufferData(GL_ARRAY_BUFFER, dataSize, data, GL_STATIC_DRAW);
glVertexPointer(3, GL_SHORT, 0, 0);
glDrawElements(..., 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glDeleteBuffers(1, &batchBufferHandle);
Apparently I'd generate and bind similar buffers for texture coordinates and vertex color data, though I'm not 100% clear on how setting those up would differ.
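For what it's worth, interleaving under ES 1.1 comes down to one shared stride plus a per-attribute byte offset into the same buffer. A minimal sketch (the struct and names are hypothetical):
#include <stddef.h> // offsetof
typedef struct {
    GLshort pos[3];   // x, y, z
    GLfloat uv[2];    // texture coordinates
    GLubyte color[4]; // RGBA
} SpriteVertex;
// With a SpriteVertex array `verts` holding vertexCount entries:
glBindBuffer(GL_ARRAY_BUFFER, batchBufferHandle);
glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(SpriteVertex), verts, GL_DYNAMIC_DRAW);
// Same stride everywhere; the offsets select the attribute within each vertex.
glVertexPointer(3, GL_SHORT, sizeof(SpriteVertex), (void*)offsetof(SpriteVertex, pos));
glTexCoordPointer(2, GL_FLOAT, sizeof(SpriteVertex), (void*)offsetof(SpriteVertex, uv));
glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(SpriteVertex), (void*)offsetof(SpriteVertex, color));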
My understanding is that the speed boost would come from rendering a bunch of these triangle fans in one "draw call", but what is a "draw call" in this context? DrawElements() gets called multiple times using this methodology, so that can't be it...?
Whatever the case, it would mean I'd have to generate a VBO (or three) containing all the data, in series, for a bunch of sprites. That can be difficult enough on its own given the legacy code I'm dealing with, but I also need to translate, scale, and rotate each individual sprite. Where does that data go in the VBO(s)?
My conclusion thus far is that using VBOs is only helpful in the case of a SINGLE, but complex, object. It would appear that what I want to do is not possible: provide OpenGL with a list of sprites to render (including all vertex, color, texture map, scale, rotation, and translation information for each).
Is my assessment correct or is there a way to do this (using OpenGL ES 1.1)?
I'm working on a little example, where I have loaded an object from a wavefront file - and am trying to get my picking right, I've gone over this and a few tutorials about 10 times... but must be missing something. Was wondering if anyone could provide an extra set of eyes.
I've used a saved display list to draw the object, which appears fine on the screen. At the moment, when gl_select(x, y) runs, I get a hit no matter what, and if I enable the translate/rotate code (which is currently commented out), I get no hits whatsoever.
Relevant code blocks:
// gl_select, is called when the mouse is clicked, with its x and y coords
void gl_select(int x, int y)
{
GLuint buff[256];
GLint hits;
GLint view[4];
//Buffer to store selection data
glSelectBuffer(256, buff);
//Viewport information
glGetIntegerv(GL_VIEWPORT, view);
//Switch to select mode
glRenderMode(GL_SELECT);
//Clear the name stack!
glInitNames();
//Fill the stack with one element
glPushName(0);
//Restric viewing volume
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
//Restrict draw area
gluPickMatrix(x, y, 1.0, 1.0, view);
gluPerspective(60, 1, 0.0001, 1000.0);
//Draw the objects onto the screen
glMatrixMode(GL_MODELVIEW);
//Draw only the names in the stack
glutSwapBuffers();
DrawSavedObject();
//Back into projection mode to push the matrix
glMatrixMode(GL_PROJECTION);
glPopMatrix();
hits = glRenderMode(GL_RENDER);
cout << hits;
//Back to modelview mode
glMatrixMode(GL_MODELVIEW);
}
And the draw functions:
void DrawSavedObject()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glColor3f(1.0,0.0,0.0);
//translate and rotate
//glRotated(rotation,0.0,0.0,1.0);
//glTranslated(7.0, 7.0, 0.0);
//Draw the saved object
glLoadName(7);
glCallList(list_object);
glutSwapBuffers();
}
And where the list is saved:
void SaveDisplayList(){
glNewList(list_object, GL_COMPILE);
glVertexPointer(3, GL_DOUBLE, 3*sizeof(GLdouble), vertices);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawElements(GL_TRIANGLES, verticesSize ,GL_UNSIGNED_INT, triangles);
glDisableClientState(GL_VERTEX_ARRAY);
glEndList();
}
Sorry again for the chunkiness of the code blocks.
A few things to consider here:
OpenGL selection mode is deprecated and never was HW accelerated, except on a few SGI boxes and 3DLabs GPUs.
Display lists don't mix with vertex arrays.
Why do you call glutSwapBuffers right before drawing your list of saved objects? Makes absolutely no sense at all.
I'm not sure if it's relevant, but you're not supposed to store things like glVertexPointer in display lists. From the reference page http://www.opengl.org/sdk/docs/man/xhtml/glNewList.xml:
Certain commands are not compiled into the display list but are executed immediately, regardless of the display-list mode. These commands are glAreTexturesResident, glColorPointer, glDeleteLists, glDeleteTextures, glDisableClientState, glEdgeFlagPointer, glEnableClientState, glFeedbackBuffer, glFinish, glFlush, glGenLists, glGenTextures, glIndexPointer, glInterleavedArrays, glIsEnabled, glIsList, glIsTexture, glNormalPointer, glPopClientAttrib, glPixelStore, glPushClientAttrib, glReadPixels, glRenderMode, glSelectBuffer, glTexCoordPointer, glVertexPointer, and all of the glGet commands.
This could be what's causing your problem.
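If that is the problem, one way around it (a sketch, assuming the vertex data is still valid at compile time) is to do the client-state setup outside the list. glDrawElements itself is compiled into the display list, with the vertex data dereferenced at the moment the list is compiled:
void SaveDisplayList() {
    // These client-state calls execute immediately anyway, so set them up first.
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_DOUBLE, 0, vertices);

    glNewList(list_object, GL_COMPILE);
    // Compiled into the list; the vertex data is read and baked in here.
    glDrawElements(GL_TRIANGLES, verticesSize, GL_UNSIGNED_INT, triangles);
    glEndList();

    glDisableClientState(GL_VERTEX_ARRAY);
}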