QOpenGLWidget and glReadPixels and depth buffer - c++

As part of a 3D mesh viewer I am making in Qt with QOpenGLWidget, I need to let the user click a node within the model. To restrict selection to visible nodes only, I have tried to include glReadPixels (GL_DEPTH_COMPONENT) in my selection algorithm.
My problem is that glReadPixels(depth) always returns 0. All the error outputs in the code below return 0 as well. glReadPixels(red) returns correct values:
GLenum err = glGetError(); // clear any pending error before the first read
QTextStream(stdout) << "error before reading gl_red = " << err << endl;
GLfloat winX, winY, myred, mydepth;
winX = mousex;
winY = this->height() - mousey;
glReadPixels(winX,winY,1,1,GL_RED,GL_FLOAT, &myred);
QTextStream(stdout) << "GL RED = " << myred << endl;
err = glGetError();
QTextStream(stdout) << "error after reading gl_red = " << err << endl;
glReadPixels(winX,winY,1,1,GL_DEPTH_COMPONENT,GL_FLOAT, &mydepth);
QTextStream(stdout) << "GL_DEPTH_COMPONENT = " << mydepth << endl;
err = glGetError();
QTextStream(stdout) << "error after reading gl_depth = " << err << endl;
My normal 3D rendering is working fine; I have glEnable(GL_DEPTH_TEST) in my initializeGL() function. At the moment I'm not using any fancy VBOs or VAOs. FaceMeshQualityColor and triVertices are both of type QVector<QVector3D>. My current face rendering proceeds as follows:
shader = shaderVaryingColor;
shader->bind();
shader->setAttributeArray("color", FaceMeshQualityColor.constData());
shader->enableAttributeArray("color");
shader->setUniformValue("mvpMatrix", pMatrix * vMatrix * mMatrix);
shader->setAttributeArray("vertex", triVertices.constData());
shader->enableAttributeArray("vertex");
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1,1);
glDrawArrays(GL_TRIANGLES, 0, triVertices.size());
glDisable(GL_POLYGON_OFFSET_FILL);
shader->disableAttributeArray("vertex");
shader->disableAttributeArray("color");
shader->release();
In my main file I explicitly set my OpenGL version to one that supports glReadPixels with GL_DEPTH_COMPONENT (as opposed to OpenGL ES 2.0):
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
QSurfaceFormat format;
format.setVersion(2, 1);
format.setProfile(QSurfaceFormat::CoreProfile);
format.setDepthBufferSize(32);
QSurfaceFormat::setDefaultFormat(format);
MainWindow w;
w.showMaximized();
return a.exec();
}
Is my problem of glReadPixels(depth) not working somehow related to my treatment of my depth buffer?
Do I need to 'activate' the depth buffer to be able to read from it before I call glReadPixels? Or do I need to have my vertex shader explicitly write the depth to some other object?

QOpenGLWidget renders into an underlying FBO, and you can't simply read the depth component from that FBO if multisampling is enabled. The easiest solution is to set the number of samples to zero, so your format setup will look like this:
QSurfaceFormat format;
format.setVersion(2, 1);
format.setProfile(QSurfaceFormat::CoreProfile);
format.setDepthBufferSize(32);
format.setSamples(0);
QSurfaceFormat::setDefaultFormat(format);
Alternatively, you can keep multisampling, but then an additional non-multisampled FBO is required into which the depth buffer can be blitted before reading:
class MyGLWidget : public QOpenGLWidget, protected QOpenGLFunctions
{
    //
    // OTHER WIDGET RELATED STUFF
    //

    QOpenGLFramebufferObject* mFBO = nullptr;

    void paintGL() override
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        //
        // DRAW YOUR SCENE HERE!
        //

        QOpenGLContext *ctx = QOpenGLContext::currentContext();

        // FBO must be re-created! is there a way to reset it?
        if (mFBO) delete mFBO;
        QOpenGLFramebufferObjectFormat format;
        format.setSamples(0);
        format.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
        mFBO = new QOpenGLFramebufferObject(size(), format);

        // Blit the widget's multisampled default FBO into the non-multisampled one,
        // including the depth buffer.
        glBindFramebuffer(GL_READ_FRAMEBUFFER, ctx->defaultFramebufferObject());
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, mFBO->handle());
        ctx->extraFunctions()->glBlitFramebuffer(0, 0, width(), height(),
                                                 0, 0, mFBO->width(), mFBO->height(),
                                                 GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT,
                                                 GL_NEAREST);

        mFBO->bind(); // must rebind, otherwise it won't work!
        float mouseDepth = 1.f;
        glReadPixels(mouseX, mouseY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &mouseDepth);
        mFBO->release();
    }
};
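If it helps for the actual node picking: once the depth value is available it can be unprojected back to a world-space point. A rough sketch, assuming the matrices and variable names from the question (winX/winY are the window coordinates already flipped to a bottom-left origin, and a depth of 1.0 usually means the click hit the background):

    // mouseDepth: the value read back from GL_DEPTH_COMPONENT above.
    if (mouseDepth < 1.0f) {                            // 1.0 == far plane, i.e. no geometry hit
        QVector3D windowPos(winX, winY, mouseDepth);
        QVector3D worldPos = windowPos.unproject(vMatrix * mMatrix,   // model-view
                                                 pMatrix,             // projection
                                                 QRect(0, 0, width(), height()));
        // worldPos can now be compared against the node positions to find the
        // closest visible node to the click.
    }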

Related

OpenGL (ES) Image Processing C++

I am currently trying to do image processing using OpenGL ES. I am trying to do basic image effects like blurring, switching color spaces, and so on.
I want to build the simplest program to do the following things:
Image loading
Image processing (using shader)
Image saving
I have managed to build the following setup :
An OpenGL context
The image I want to do effects on loaded using DevIL.
Two shaders (one vertex shaders and one fragment shaders)
I am now stuck at using the image I loaded to send data to the fragment shader. What I am trying to do is send the image as a sampler2D to the fragment shader and apply the processing to it.
I have multiple questions such as:
Do I need a vertex shader if all I want to do is pure 2D image processing ?
If I do, what should be done in this vertex shader as I have no vertices at all. Should I create quad vertices (like (0,0) (1, 0) (0, 1) (1, 1)) ? If so, why ?
Do I need to use things like VBO (which seems to be related to the vertex shader), FBO or other thing like that ?
Can't I just load my image into the texture and wait for the fragment shader to do everything I want on this texture ?
Can someone provide a simple piece of "clean" code that could help me understand (without any fancy classes that make the understanding so complicated)?
Here is what my fragment shader looks like for simple color swapping:
uniform int width;
uniform int height;
uniform sampler2D texture;
void main() {
vec2 texcoord = vec2(gl_FragCoord.x/width, gl_FragCoord.y/height);
vec4 texture_value = texture2D(texture, texcoord);
gl_FragColor = texture_value.bgra;
}
and my main.cpp :
int main(int argc, char** argv)
{
if (argc != 4) {
std::cerr << "Usage: " << argv[0] << " <vertex shader path> <fragment shader path> <image path>" << std::endl;
return EXIT_FAILURE;
}
// Get an EGL valid display
EGLDisplay display;
display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
if (display == EGL_NO_DISPLAY) {
std::cerr << "Failed to get EGL Display" << std::endl
<< "Error: " << eglGetError() << std::endl;
return EXIT_FAILURE;
}
else {
std::cerr << "Successfully get EGL Display." << std::endl;
}
// Create a connection to the display
int major, minor;
if (eglInitialize(display, &major, &minor) == EGL_FALSE) {
std::cerr << "Failed to initialize EGL Display" << std::endl
<< "Error: " << eglGetError() << std::endl;
eglTerminate(display);
return EXIT_FAILURE;
}
else {
std::cerr << "Successfully initialized display (EGL version " << major << "." << minor << ")." << std::endl;
}
// OpenGL ES Config are used to specify things like multi sampling, channel size, stencil buffer usage, & more
// See the doc: https://www.khronos.org/registry/EGL/sdk/docs/man/html/eglChooseConfig.xhtml for more information
EGLConfig config;
EGLint num_configs;
if (!eglChooseConfig(display, configAttribs, &config, 1, &num_configs)) {
std::cerr << "Failed to choose EGL Config" << std::endl
<< "Error: " << eglGetError() << std::endl;
eglTerminate(display);
return EXIT_FAILURE;
}
else {
std::cerr << "Successfully chose EGL Config (" << num_configs << ")." << std::endl;
}
// Creating an OpenGL Render Surface with surface attributes defined above.
EGLSurface surface = eglCreatePbufferSurface(display, config, pbufferAttribs);
if (surface == EGL_NO_SURFACE) {
std::cerr << "Failed to create EGL Surface." << std::endl
<< "Error: " << eglGetError() << std::endl;
}
else {
std::cerr << "Successfully created OpenGL ES Surface." << std::endl;
}
eglBindAPI(EGL_OPENGL_API);
EGLContext context = eglCreateContext(display, config, EGL_NO_CONTEXT, contextAttribs);
if (context == EGL_NO_CONTEXT) {
std::cerr << "Failed to create EGL Context." << std::endl
<< "Error: " << eglGetError() << std::endl;
}
else {
std::cerr << "Successfully created OpenGL ES Context." << std::endl;
}
//Bind context to surface
eglMakeCurrent(display, surface, surface, context);
// Create viewport and check if it has been created correctly
glViewport(0, 0, WIDTH, HEIGHT);
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
if (viewport[2] != WIDTH || viewport[3] != HEIGHT) {
std::cerr << "Failed to create the viewport. Size does not match (glViewport/glGetIntegerv not working)." << std::endl
<< "OpenGL ES might be faulty!" << std::endl
<< "If you are on Raspberry Pi, you should not have updated EGL, as it will install a fake EGL." << std::endl;
eglTerminate(display);
return EXIT_FAILURE;
}
// Clear buffer and get ready to draw some things
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Create a shader program
GLuint program = load_shaders(std::string(argv[1]), std::string(argv[2]));
if (program == -1)
{
std::cerr << "Failed to create a shader program. See above for more details." << std::endl;
eglTerminate(display);
return EXIT_FAILURE;
}
/* Initialization of DevIL */
if (ilGetInteger(IL_VERSION_NUM) < IL_VERSION) {
std::cerr << "Failed to use DevIL: Wrong version." << std::endl;
return EXIT_FAILURE;
}
ilInit();
ILuint image = load_image(argv[3]);
GLuint texId;
glGenTextures(1, &texId); /* Texture name generation */
glBindTexture(GL_TEXTURE_2D, texId); /* Binding of texture name */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); /* We will use linear interpolation for magnification filter */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* We will use linear interpolation for minifying filter */
glTexImage2D(GL_TEXTURE_2D, 0, ilGetInteger(IL_IMAGE_BPP), ilGetInteger(IL_IMAGE_WIDTH), ilGetInteger(IL_IMAGE_HEIGHT),
0, ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE, ilGetData()); /* Texture specification */
Thanks.
Do I need a vertex shader as all I want to do is pure 2D image processing ?
Using vertex and fragment shaders is mandatory in OpenGL ES 2.
If I do, what should be done in this vertex shader as I have no vertices at all. Should I create quad vertices (like (0,0) (1, 0) (0, 1) (1, 1)) ? If so, why ?
Yes. Because that's how OpenGL ES 2 works. Otherwise you would need to use something like compute shaders (supported in OpenGL ES 3.1+) or OpenCL.
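For illustration, the quad data and a pass-through vertex shader can be as small as this (a sketch; the attribute name "position" is an assumption, not taken from your code):

    // Quad corners in normalized device coordinates, drawn later as a triangle strip.
    static const GLfloat quad[] = {
        -1.f, -1.f,
         1.f, -1.f,
        -1.f,  1.f,
         1.f,  1.f,
    };

    // Minimal pass-through vertex shader: it only forwards the corner positions;
    // all of the actual image processing stays in the fragment shader.
    static const char* vertexSrc =
        "attribute vec2 position;\n"
        "void main() {\n"
        "    gl_Position = vec4(position, 0.0, 1.0);\n"
        "}\n";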
Do I need to use things like VBO (which seems to be related to the vertex shader), FBO or other thing like that ?
Using VBO/IBO won't make practically any difference for you since you only have 4 vertices and 2 primitives. You may want to render to texture, depending on your needs.
Can't I just load my image into the texture and wait for the fragment shader to do everything I want on this texture ?
No.
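The fragment shader only runs for fragments produced by a draw call, so you still have to draw a quad that covers the image. A sketch of that draw, reusing the quad and "position" attribute assumed above together with the uniforms from your fragment shader (program and texId are the shader program and texture you already create):

    glUseProgram(program);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texId);
    glUniform1i(glGetUniformLocation(program, "texture"), 0);   // texture unit 0
    glUniform1i(glGetUniformLocation(program, "width"),  ilGetInteger(IL_IMAGE_WIDTH));
    glUniform1i(glGetUniformLocation(program, "height"), ilGetInteger(IL_IMAGE_HEIGHT));

    GLint pos = glGetAttribLocation(program, "position");
    glEnableVertexAttribArray(pos);
    glVertexAttribPointer(pos, 2, GL_FLOAT, GL_FALSE, 0, quad);  // client-side array, no VBO needed
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // The processed result can then be read back with glReadPixels for saving.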

OpenGL Depth Testing not working (GLEW/SDL2)

I am working on a simple OpenGL rendering engine as a project to learn C++ and OpenGL. I am following along with a YouTube tutorial series that does it in Java (which I know) and translating it to C++.
I'm hitting a snag trying to render a cube from an OBJ file that I read in with Assimp. It appears I haven't set up depth testing/culling correctly, but I can't for the life of me figure out what I am doing wrong. It appears that faces on the back of the object are not getting culled and are rendering over faces that are in front of them.
Images of the cube rendering with some back faces being drawn over front faces:
I am using GLEW + SDL2 to initialize opengl and create a window.
I have made sure to set the following when initializing:
Window::Window(const int width, const int height, const std::string& title)
{
m_isClosed = false;
RenderUtil::initGraphics();
m_window = SDL_CreateWindow(title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height, SDL_WINDOW_OPENGL);
m_glContext = SDL_GL_CreateContext(m_window);
glewExperimental = GL_TRUE;
GLenum status = glewInit();
if (status != GLEW_OK) {
std::cerr << "WARNING WILL ROBINSON!" << std::endl;
std::cerr << "GLEW failed to initialize" << std::endl;
std::cerr << "GLEW Error Code: " << status << std::endl;
std::cerr << "GLEW Error Message: " << glewGetErrorString(status);
exit(1);
}
}
void RenderUtil::initGraphics() {
SDL_Init(SDL_INIT_EVERYTHING);
SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 16);
SDL_GL_SetAttribute(SDL_GL_BUFFER_SIZE, 32);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glEnable(GL_DEPTH_TEST);
glEnable(GL_FRAMEBUFFER_SRGB);
}
During the program loop I make sure to clear the buffers as well
void RenderUtil::clearScreen() {
// TODO: stencil buffer
glClearColor(RU_CLEAR_R, RU_CLEAR_G, RU_CLEAR_B, RU_CLEAR_A);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}
I'm really at a loss as to what could be causing this. The full code for the project can be found on GitHub, in case there is something missing from the question that I didn't know to add.
Thanks in Advance for the Help!
OpenGL state changes are only possible when a valid context is available. In your program, you are trying to enable the depth test before the context has been created.
Moving glEnable(GL_DEPTH_TEST) after SDL_GL_CreateContext should solve the problem.
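A sketch of the corrected order, keeping the names from the question (the SDL attribute calls stay in initGraphics(), while the glEnable calls move to after context creation):

    Window::Window(const int width, const int height, const std::string& title)
    {
        m_isClosed = false;
        RenderUtil::initGraphics();   // SDL_Init + SDL_GL_SetAttribute calls only
        m_window = SDL_CreateWindow(title.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                    width, height, SDL_WINDOW_OPENGL);
        m_glContext = SDL_GL_CreateContext(m_window);   // the context now exists

        glewExperimental = GL_TRUE;
        if (glewInit() != GLEW_OK) { /* error handling as before */ }

        // GL state changes are safe from here on.
        glEnable(GL_DEPTH_TEST);
        glEnable(GL_CULL_FACE);
        glCullFace(GL_BACK);
        glEnable(GL_FRAMEBUFFER_SRGB);
    }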

Oculus 0.8 SDK Black Screen

I'm trying to make a very basic example of rendering to the Oculus using their SDK v0.8. All I'm trying to do is render a solid color to both eyes. When I run this, everything appears to initialize correctly. The Oculus shows the health warning message, but all I see is a black screen once the health warning message goes away. What am I doing wrong here?
#define GLEW_STATIC
#include <GL/glew.h>
#define OVR_OS_WIN32
#include <OVR_CAPI_GL.h>
#include <SDL.h>
#include <iostream>
int main(int argc, char *argv[])
{
SDL_Init(SDL_INIT_VIDEO);
SDL_Window* window = SDL_CreateWindow("OpenGL", 100, 100, 800, 600, SDL_WINDOW_OPENGL);
SDL_GLContext context = SDL_GL_CreateContext(window);
//Initialize GLEW
glewExperimental = GL_TRUE;
glewInit();
// Initialize Oculus context
ovrResult result = ovr_Initialize(nullptr);
if (OVR_FAILURE(result))
{
std::cout << "ERROR: Failed to initialize libOVR" << std::endl;
SDL_Quit();
return -1;
}
// Connect to the Oculus headset
ovrSession hmd;
ovrGraphicsLuid luid;
result = ovr_Create(&hmd, &luid);
if (OVR_FAILURE(result))
{
std::cout << "ERROR: Oculus Rift not detected" << std::endl;
SDL_Quit();
return 0;
}
ovrHmdDesc desc = ovr_GetHmdDesc(hmd);
std::cout << "Found " << desc.ProductName << "connected Rift device" << std::endl;
ovrSizei recommenedTex0Size = ovr_GetFovTextureSize(hmd, ovrEyeType(0), desc.DefaultEyeFov[0], 1.0f);
ovrSizei bufferSize;
bufferSize.w = recommenedTex0Size.w;
bufferSize.h = recommenedTex0Size.h;
std::cout << "Buffer Size: " << bufferSize.w << ", " << bufferSize.h << std::endl;
// Generate FBO for oculus
GLuint oculusFbo = 0;
glGenFramebuffers(1, &oculusFbo);
// Create swap texture
ovrSwapTextureSet* pTextureSet = nullptr;
if (ovr_CreateSwapTextureSetGL(hmd, GL_SRGB8_ALPHA8, bufferSize.w, bufferSize.h,&pTextureSet) == ovrSuccess)
{
ovrGLTexture* tex = (ovrGLTexture*)&pTextureSet->Textures[0];
glBindTexture(GL_TEXTURE_2D, tex->OGL.TexId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}
// Create ovrLayerHeader
ovrEyeRenderDesc eyeRenderDesc[2];
eyeRenderDesc[0] = ovr_GetRenderDesc(hmd, ovrEye_Left, desc.DefaultEyeFov[0]);
eyeRenderDesc[1] = ovr_GetRenderDesc(hmd, ovrEye_Right, desc.DefaultEyeFov[1]);
ovrLayerEyeFov layer;
layer.Header.Type = ovrLayerType_EyeFov;
layer.Header.Flags = ovrLayerFlag_TextureOriginAtBottomLeft | ovrLayerFlag_HeadLocked;
layer.ColorTexture[0] = pTextureSet;
layer.ColorTexture[1] = pTextureSet;
layer.Fov[0] = eyeRenderDesc[0].Fov;
layer.Fov[1] = eyeRenderDesc[1].Fov;
ovrVector2i posVec;
posVec.x = 0;
posVec.y = 0;
ovrSizei sizeVec;
sizeVec.w = bufferSize.w;
sizeVec.h = bufferSize.h;
ovrRecti rec;
rec.Pos = posVec;
rec.Size = sizeVec;
layer.Viewport[0] = rec;
layer.Viewport[1] = rec;
ovrLayerHeader* layers = &layer.Header;
SDL_Event windowEvent;
while (true)
{
if (SDL_PollEvent(&windowEvent))
{
if (windowEvent.type == SDL_QUIT) break;
}
ovrGLTexture* tex = (ovrGLTexture*)&pTextureSet->Textures[0];
glBindFramebuffer(GL_FRAMEBUFFER, oculusFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex->OGL.TexId, 0);
glViewport(0, 0, bufferSize.w, bufferSize.h);
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
ovr_SubmitFrame(hmd, 0, nullptr, &layers, 1);
SDL_GL_SwapWindow(window);
}
SDL_GL_DeleteContext(context);
SDL_Quit();
return 0;
}
There are a number of problems here:
Not initializing ovrLayerEyeFov.RenderPose
Not using ovrSwapTextureSet correctly
Useless calls to SDL_GL_SwapWindow will cause stuttering
Possible undefined behavior when reading the texture while it's still bound for drawing
Not initializing ovrLayerEyeFov.RenderPose
Your main problem is that you're not setting the RenderPose member of the ovrLayerEyeFov structure. This member tells the SDK what pose you rendered at, and therefore how it should apply timewarp based on the current head pose (which might have changed since you rendered). By not setting this value you're basically giving the SDK a random head pose, which is almost certainly not a valid head pose.
Additionally, ovrLayerFlag_HeadLocked isn't needed for your layer type. It causes the Oculus to display the resulting image in a fixed position relative to your head. It might do what you want, but only if you properly initialize the layer.RenderPose members with the correct values (I'm not sure what those would be in the case of ovrLayerEyeFov, as I've only used the flag in combination with ovrLayerQuad).
What you should do is add the following right after the layer declaration to properly initialize it:
memset(&layer, 0, sizeof(ovrLayerEyeFov));
Then, inside your render loop you should add the following right after the check for a quit event:
ovrTrackingState tracking = ovr_GetTrackingState(hmd, 0, true);
layer.RenderPose[0] = tracking.HeadPose.ThePose;
layer.RenderPose[1] = tracking.HeadPose.ThePose;
This tells the SDK that this image was rendered from the point of view where the head currently is.
Not using ovrSwapTextureSet correctly
Another problem in the code is that you're incorrectly using the texture set. The documentation specifies that when using the texture set, you need to use the texture pointed to by ovrSwapTextureSet.CurrentIndex:
ovrGLTexture* tex = (ovrGLTexture*)(&(pTextureSet->Textures[pTextureSet->CurrentIndex]));
...and then after each call to ovr_SubmitFrame you need to increment ovrSwapTextureSet.CurrentIndex and wrap it modulo ovrSwapTextureSet.TextureCount, like so:
pTextureSet->CurrentIndex = (pTextureSet->CurrentIndex + 1) % pTextureSet->TextureCount;
Useless calls to SDL_GL_SwapWindow will cause stuttering
The SDL_GL_SwapWindow(window); call is unnecessary and pointless since you haven't drawn anything to the default framebuffer. Once you move away from drawing a solid color, this call will end up causing judder, since it will block until v-sync (typically at 60 Hz), causing you to sometimes miss the refresh of the Oculus display. Right now this is invisible because your scene is just a solid color, but later on, when you're rendering objects in 3D, it will cause intolerable judder.
You can use SDL_GL_SwapWindow if you:
Ensure v-sync is disabled
Have a mirror texture available to draw to the window. (See the documentation for ovr_CreateMirrorTextureGL)
Possible framebuffer issues
I'm less certain about this one being a serious problem, but I would also suggest unbinding the framebuffer and detaching the Oculus provided texture before sending it to ovr_SubmitFrame(), as I'm not certain that the behavior is well defined when reading from a texture attached to a framebuffer that is currently bound for drawing. It seems to have no impact on my local system, but undefined doesn't mean doesn't work, it just means you can't rely on it to work.
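A minimal sketch of that precaution, using the variables from the loop above (attaching texture 0 detaches the current color attachment; that and unbinding the FBO are standard GL):

    // Detach the swap texture and unbind the FBO before handing the frame to the SDK,
    // so the texture is no longer attached to a framebuffer bound for drawing.
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    ovr_SubmitFrame(hmd, 0, nullptr, &layers, 1);

    // Advance to the next texture in the swap set, as described above.
    pTextureSet->CurrentIndex = (pTextureSet->CurrentIndex + 1) % pTextureSet->TextureCount;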
I've updated the sample code and put it here. As a bonus I've modified it so it draws one color on the left eye and a different color on the right eye, as well as setting up the buffer to provide for rendering one half of the buffer for each eye.

openGL migration from SFML to glut, vertices arrays or display lists are not displayed

Due to using quad-buffered stereo 3D (which I have not included yet), I need to migrate my OpenGL program from an SFML window to a GLUT window.
With SFML my vertices and display lists were displayed properly; now with GLUT my window is blank white (or another color, depending on how I clear it).
Here is the code to initialise the window:
int type;
int stereoMode = 0;
if ( stereoMode == 0 )
type = GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH;
else
type = GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO;
glutInitDisplayMode(type);
int argc = 0;
char *argv = "";
glewExperimental = GL_TRUE;
glutInit(&argc, &argv);
bool fullscreen = false;
glutInitWindowSize(width,height);
int win = glutCreateWindow(title.c_str());
glutSetWindow(win);
assert(win != 0);
if ( fullscreen ) {
glutFullScreen();
width = glutGet(GLUT_SCREEN_WIDTH);
height = glutGet(GLUT_SCREEN_HEIGHT);
}
GLenum err = glewInit();
if (GLEW_OK != err) {
fprintf(stderr, "Error: %s\n", glewGetErrorString(err));
}
glutDisplayFunc(loop_function);
This is the only code I had to change for now. Here is the loop code I used with SFML to display my objects; if I change the value passed to glClearColor, the window's background does change color, so the OpenGL context seems to be working:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glClearColor(255.0f, 255.0f, 255.0f, 0.0f);
glLoadIdentity();
sf::Time elapsed_time = clock.getElapsedTime();
clock.restart();
camera->animate(elapsed_time.asMilliseconds());
camera->look();
for (auto i = objects->cbegin(); i != objects->cend(); ++i)
(*i)->draw(camera);
glutSwapBuffers();
Is there any other change I should have made when switching to GLUT? It would be great if someone could enlighten me on the subject.
In addition, I found that adding too many objects (which were handled fine before with SFML) makes OpenGL give error 1285: out of memory. Maybe this is related.
EDIT:
Here is the code I use to draw each object; maybe it is the problem:
GLuint LightID = glGetUniformLocation(this->shaderProgram, "LightPosition_worldspace");
if(LightID ==-1)
cout << "LightID not found ..." << endl;
GLuint MaterialAmbientID = glGetUniformLocation(this->shaderProgram, "MaterialAmbient");
if(LightID ==-1)
cout << "LightID not found ..." << endl;
GLuint MaterialSpecularID = glGetUniformLocation(this->shaderProgram, "MaterialSpecular");
if(LightID ==-1)
cout << "LightID not found ..." << endl;
glm::vec3 lightPos = glm::vec3(0,150,150);
glUniform3f(LightID, lightPos.x, lightPos.y, lightPos.z);
glUniform3f(MaterialAmbientID, MaterialAmbient.x, MaterialAmbient.y, MaterialAmbient.z);
glUniform3f(MaterialSpecularID, MaterialSpecular.x, MaterialSpecular.y, MaterialSpecular.z);
// Get a handle for our "myTextureSampler" uniform
GLuint TextureID = glGetUniformLocation(shaderProgram, "myTextureSampler");
if(!TextureID)
cout << "TextureID not found ..." << endl;
glActiveTexture(GL_TEXTURE0);
sf::Texture::bind(texture);
glUniform1i(TextureID, 0);
// 2nd attribute buffer : UV
GLuint vertexUVID = glGetAttribLocation(shaderProgram, "color");
if(vertexUVID==-1)
cout << "vertexUVID not found ..." << endl;
glEnableVertexAttribArray(vertexUVID);
glBindBuffer(GL_ARRAY_BUFFER, color_array_buffer);
glVertexAttribPointer(vertexUVID, 2, GL_FLOAT, GL_FALSE, 0, 0);
GLuint vertexNormal_modelspaceID = glGetAttribLocation(shaderProgram, "normal");
if(!vertexNormal_modelspaceID)
cout << "vertexNormal_modelspaceID not found ..." << endl;
glEnableVertexAttribArray(vertexNormal_modelspaceID);
glBindBuffer(GL_ARRAY_BUFFER, normal_array_buffer);
glVertexAttribPointer(vertexNormal_modelspaceID, 3, GL_FLOAT, GL_FALSE, 0, 0 );
GLint posAttrib;
posAttrib = glGetAttribLocation(shaderProgram, "position");
if(!posAttrib)
cout << "posAttrib not found ..." << endl;
glEnableVertexAttribArray(posAttrib);
glBindBuffer(GL_ARRAY_BUFFER, position_array_buffer);
glVertexAttribPointer(posAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elements_array_buffer);
glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, 0);
GLuint error;
while ((error = glGetError()) != GL_NO_ERROR) {
cerr << "OpenGL error: " << error << endl;
}
disableShaders();
The code is fine; migrating from SFML to GLUT doesn't need a lot of changes, but you will have to change the textures if you used SFML texture objects. The only way you would see nothing but your background changing color is if your camera is simply not looking at your object.
I advise you to check the code of your view and/or post it.
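If you want to drop the SFML texture binding, one option (just a sketch, with a placeholder file path) is to load the pixels with sf::Image and create the GL texture yourself, then call glBindTexture(GL_TEXTURE_2D, texId) where you currently call sf::Texture::bind(texture):

    // Load the pixels with sf::Image and upload them to a raw GL texture.
    sf::Image img;
    if (!img.loadFromFile("texture.png"))       // placeholder path
        std::cerr << "failed to load texture" << std::endl;

    GLuint texId = 0;
    glGenTextures(1, &texId);
    glBindTexture(GL_TEXTURE_2D, texId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                 img.getSize().x, img.getSize().y, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, img.getPixelsPtr());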

Efficient way of reading depth values from depth buffer

For an algorithm of mine I need to be able to access the depth buffer. I have no problem at all doing this using glReadPixels, but reading an 800x600 window is extremely slow (from 300 FPS down to 20 FPS).
I'm reading a lot about this and I think dumping the depth buffer to a texture would be faster. I know how to create a texture, but how do I get the depth out?
Creating an FBO and creating the texture from there might be even faster, at the moment I am using an FBO (but still in combination with glReadPixels).
So what is the fastest way to do this?
(I'm probably not able to use GLSL because I don't know anything about it and I don't have much time left to learn, deadlines!)
edit:
Would a PBO work? As described here: http://www.songho.ca/opengl/gl_pbo.html it can be a lot faster, but I cannot swap buffers all the time as in the example.
Edit2:
How would I go about putting the depth data in the PBO? At the moment I do:
glGenBuffersARB(1, &pboId);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboId);
glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, 800*600*sizeof(GLfloat),0, GL_STREAM_READ_ARB);
and right before my glReadPixels call I bind the buffer again. The effect is that I read nothing at all. If I disable the PBOs, it all works.
Final edit:
I guess I solved it, I had to use:
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboId);
glReadPixels( 0, 0,Engine::fWidth, Engine::fHeight, GL_DEPTH_COMPONENT,GL_FLOAT, BUFFER_OFFSET(0));
GLuint *pixels = (GLuint*)glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY);
This gave me a 20 FPS increase. It's not that much but it's something.
So, I used 2 PBOs, but I'm still encountering a problem: my code only gets executed once.
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboIds[index]);
std::cout << "Reading pixels" << std::endl;
glReadPixels( 0, 0,Engine::fWidth, Engine::fHeight, GL_DEPTH_COMPONENT,GL_FLOAT, BUFFER_OFFSET(0));
std::cout << "Getting pixels" << std::endl;
// glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, 800*600*sizeof(GLfloat), 0, GL_STREAM_DRAW_ARB);
GLfloat *pixels = (GLfloat*)glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY);
int count = 0;
for(int i = 0; i != 800*600; ++i){
std::cout << pixels[i] << std::endl;
}
The last line executes once, but only once; after that the method keeps being called (which is normal) but stops at the loop over pixels.
I apparently forgot to call glUnmapBufferARB; that more or less solved it, though my framerate is lower again.
I then decided to give FBOs a go, but I stumbled across a problem:
Initialising FBO:
glGenFramebuffersEXT(1, framebuffers);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffers[0]);
std::cout << "framebuffer generated, id: " << framebuffers[0] << std::endl;
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
glGenRenderbuffersEXT(1,renderbuffers);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, renderbuffers[0]);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, 800, 600);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, renderbuffers[0]);
bool status = checkFramebufferStatus();
if(!status)
std::cout << "Could not initialise FBO" << std::endl;
else
std::cout << "FBO ready!" << std::endl;
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
My drawing loop:
GLenum errCode;
const GLubyte *errString;
if ((errCode = glGetError()) != GL_NO_ERROR) {
errString = gluErrorString(errCode);
fprintf (stderr, "OpenGL Error: %s\n", errString);
}
++frameCount;
// ----------- First pass to fill the depth buffer -------------------
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffers[0]);
std::cout << "FBO bound" << std::endl;
//Enable depth testing
glEnable(GL_DEPTH_TEST);
glDisable(GL_STENCIL_TEST);
glDepthMask( GL_TRUE );
//Disable stencil test, we don't need that for this pass
glClearStencil(0);
glEnable(GL_STENCIL_TEST);
//Disable drawing to the color buffer
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
//We clear all buffers and reset the modelview matrix
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
glLoadIdentity();
//We set our viewpoint
gluLookAt(eyePoint[0],eyePoint[1], eyePoint[2], 0.0,0.0,0.0,0.0,1.0,0.0);
//std::cout << angle << std::endl;
std::cout << "Writing to FBO depth" << std::endl;
//Draw the VBO's, this does not draw anything to the screen, we are just filling the depth buffer
glDrawElements(GL_TRIANGLES, 120, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
After this I call a function that calls glReadPixels(). The function does not even get called; the loop restarts at the function call.
Apparently I solved this as well: I had to use
glReadPixels( 0, 0,Engine::fWidth, Engine::fHeight, GL_DEPTH_COMPONENT,GL_UNSIGNED_SHORT, pixels);
With GL_UNSIGNED_SHORT instead of GL_FLOAT (or any other format for that matter)
The fastest way of doing this is using asynchronous pixel buffer objects; there's a good explanation here:
http://www.songho.ca/opengl/gl_pbo.html
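For reference, a rough sketch of the double-buffered (ping-pong) readback that article describes, built from the same ARB calls used in the question (the two PBOs are assumed to have been created once at startup with glGenBuffersARB/glBufferDataARB, sized 800*600*sizeof(GLfloat)):

    // Each frame: start an asynchronous read into one PBO and map the other,
    // which was filled during the previous frame.
    static GLuint pbo[2];
    static int frame = 0;

    int readIndex = frame % 2;          // PBO that receives this frame's depth
    int mapIndex  = (frame + 1) % 2;    // PBO filled last frame, now ready to map

    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo[readIndex]);
    glReadPixels(0, 0, 800, 600, GL_DEPTH_COMPONENT, GL_FLOAT, 0);   // returns immediately

    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo[mapIndex]);
    GLfloat* depth = (GLfloat*)glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY);
    if (depth) {
        // ... use the previous frame's depth values here ...
        glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB);   // the step that was missing above
    }
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);
    ++frame;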
I would render to an FBO and read its depth buffer after the frame has been rendered. PBOs are outdated technology.
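If you go the FBO route, the depth attachment can also be a texture rather than a renderbuffer, so the depth values stay on the GPU and can be sampled directly in a later pass instead of being read back. A sketch using the same EXT entry points as the question (texture format and filtering are illustrative):

    // Create a depth texture and attach it to the FBO in place of the renderbuffer.
    GLuint depthTex = 0;
    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 800, 600, 0,
                 GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffers[0]);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                              GL_TEXTURE_2D, depthTex, 0);
    glDrawBuffer(GL_NONE);
    glReadBuffer(GL_NONE);
    // After the depth-only pass, depthTex holds the depth values and can be
    // bound as a normal texture in a later pass.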