For an algorithm of mine I need to be able to access the depth buffer. I have no problem doing this with glReadPixels, but reading an 800x600 window is extremely slow (from 300 FPS down to 20 FPS).
I've been reading a lot about this, and I think dumping the depth buffer to a texture would be faster. I know how to create a texture, but how do I get the depth out?
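For reference, one way to get the depth into a texture without a CPU round trip is glCopyTexSubImage2D on a depth texture. A minimal sketch, assuming nothing from the code above except the 800x600 window size:
// One-time setup: allocate a depth texture matching the window size.
GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 800, 600, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
// Per frame, after rendering: copy the depth buffer into the texture.
// The copy stays on the GPU, avoiding the glReadPixels round trip.
glBindTexture(GL_TEXTURE_2D, depthTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 800, 600);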
Creating an FBO and creating the texture from there might be even faster. At the moment I am already using an FBO, but still in combination with glReadPixels.
So what is the fastest way to do this?
(I'm probably not able to use GLSL because I don't know anything about it, and I don't have much time left to learn it. Deadlines!)
edit:
Would a PBO work? As described here: http://www.songho.ca/opengl/gl_pbo.html it can be a lot faster, but I cannot change buffers all the time as in the example.
Edit2:
How would I go about putting the depth data in the PBO? At the moment I do:
glGenBuffersARB(1, &pboId);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboId);
glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, 800*600*sizeof(GLfloat), 0, GL_STREAM_READ_ARB); // room for one 800x600 float depth frame
and right before my glReadPixels call I bind the buffer again. The effect is that I read nothing at all. If I disable the PBOs, it all works.
Final edit:
I guess I solved it; I had to use:
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboId);
glReadPixels(0, 0, Engine::fWidth, Engine::fHeight, GL_DEPTH_COMPONENT, GL_FLOAT, BUFFER_OFFSET(0));
GLfloat *pixels = (GLfloat*)glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY); // map as GLfloat to match the GL_FLOAT read
This gave me a 20 FPS increase. It's not that much but it's something.
So, I used two PBOs, but I'm still encountering a problem: my code only gets executed once.
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboIds[index]);
std::cout << "Reading pixels" << std::endl;
glReadPixels(0, 0, Engine::fWidth, Engine::fHeight, GL_DEPTH_COMPONENT, GL_FLOAT, BUFFER_OFFSET(0));
std::cout << "Getting pixels" << std::endl;
// glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, 800*600*sizeof(GLfloat), 0, GL_STREAM_DRAW_ARB);
GLfloat *pixels = (GLfloat*)glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY);
int count = 0;
for(int i = 0; i != 800*600; ++i){
std::cout << pixels[i] << std::endl;
}
The last line executes once, but only once. After that, the method keeps getting called (which is normal), but execution halts at the access to pixels.
I apparently forgot to call glUnmapBufferARB; that kind of solved it, though my framerate is slower again.
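For completeness, the alternating two-PBO pattern from the songho page, adapted to this depth read, looks roughly like this (a sketch; pboIds, Engine::fWidth/fHeight and BUFFER_OFFSET are the names used above):
static int index = 0;
int nextIndex = (index + 1) % 2;
// Start an asynchronous read into the current PBO; this returns immediately.
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboIds[index]);
glReadPixels(0, 0, Engine::fWidth, Engine::fHeight, GL_DEPTH_COMPONENT, GL_FLOAT, BUFFER_OFFSET(0));
// Map the other PBO, which glReadPixels filled during the previous frame.
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboIds[nextIndex]);
GLfloat *depth = (GLfloat*)glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY);
if (depth) {
    // ... use depth[0 .. Engine::fWidth*Engine::fHeight - 1] here ...
    glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB); // always unmap before reuse
}
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);
index = nextIndex;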
I decided to give FBOs a go, but I stumbled across a problem:
Initialising FBO:
glGenFramebuffersEXT(1, framebuffers);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffers[0]);
std::cout << "framebuffer generated, id: " << framebuffers[0] << std::endl;
glDrawBuffer(GL_NONE); // depth-only FBO: no color buffer to draw to...
glReadBuffer(GL_NONE); // ...and none to read from
glGenRenderbuffersEXT(1, renderbuffers);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, renderbuffers[0]);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, 800, 600); // depth storage for the 800x600 window
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, renderbuffers[0]);
bool status = checkFramebufferStatus();
if(!status)
std::cout << "Could not initialise FBO" << std::endl;
else
std::cout << "FBO ready!" << std::endl;
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
My drawing loop:
GLenum errCode;
const GLubyte *errString;
if ((errCode = glGetError()) != GL_NO_ERROR) {
errString = gluErrorString(errCode);
fprintf (stderr, "OpenGL Error: %s\n", errString);
}
++frameCount;
// ----------- First pass to fill the depth buffer -------------------
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffers[0]);
std::cout << "FBO bound" << std::endl;
//Enable depth testing
glEnable(GL_DEPTH_TEST);
glDisable(GL_STENCIL_TEST);
glDepthMask( GL_TRUE );
//Reset the stencil clear value and re-enable the stencil test
glClearStencil(0);
glEnable(GL_STENCIL_TEST);
//Disable drawing to the color buffer
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
//We clear all buffers and reset the modelview matrix
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
glLoadIdentity();
//We set our viewpoint
gluLookAt(eyePoint[0],eyePoint[1], eyePoint[2], 0.0,0.0,0.0,0.0,1.0,0.0);
//std::cout << angle << std::endl;
std::cout << "Writing to FBO depth" << std::endl;
//Draw the VBO's, this does not draw anything to the screen, we are just filling the depth buffer
glDrawElements(GL_TRIANGLES, 120, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
After this I call a function that calls glReadPixels().
The function does not even get called; the loop restarts at the call.
Apparently I solved this as well: I had to use
glReadPixels(0, 0, Engine::fWidth, Engine::fHeight, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, pixels);
with GL_UNSIGNED_SHORT instead of GL_FLOAT (or any other type, for that matter).
The fastest way of doing this is with asynchronous pixel buffer objects; there's a good explanation here:
http://www.songho.ca/opengl/gl_pbo.html
I would render to an FBO and read its depth buffer after the frame has been rendered. PBOs are outdated technology.
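A minimal sketch of that approach, using core (non-EXT) entry points and illustrative names; only the 800x600 size is from the question:
// One-time setup: FBO with a depth texture attached, no color buffer.
GLuint fbo, depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 800, 600, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
glDrawBuffer(GL_NONE); // depth-only pass
glReadBuffer(GL_NONE);
// ... render the scene into the FBO ...
// The depth values now live in depthTex: sample it from a shader,
// or read it back only if the CPU really needs it.
std::vector<GLfloat> depth(800 * 600);
glBindTexture(GL_TEXTURE_2D, depthTex);
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());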
Related
I am creating a color-picker OpenGL application for images with ImGui. I have managed to load an image by uploading it with glTexImage2D and displaying it with ImGui::Image().
Now I would like to implement a method, which can determine the color of the pixel in case of a left mouse click.
Here is the method where I load the texture and then assign it to a framebuffer:
bool LoadTextureFromFile(const char *filename, GLuint *out_texture, int *out_width, int *out_height,ImVec2 mousePosition ) {
// Reading the image into a GL_TEXTURE_2D
int image_width = 0;
int image_height = 0;
unsigned char *image_data = stbi_load(filename, &image_width, &image_height, NULL, 4);
if (image_data == NULL)
return false;
GLuint image_texture;
glGenTextures(1, &image_texture);
glBindTexture(GL_TEXTURE_2D, image_texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image_width, image_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, image_data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
stbi_image_free(image_data);
glBindTexture(GL_TEXTURE_2D, 0);
*out_texture = image_texture;
*out_width = image_width;
*out_height = image_height;
// Assigning texture to Frame Buffer
unsigned int fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, image_texture, 0);
if(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE){
std::cout<< "Frame buffer is done."<< std::endl;
}
return true;
}
Unfortunately, the above code results in a completely blank screen. I guess, there is something I missed during setting the framebuffer.
Here is the method, where I would like to sample the framebuffer texture by using the mouse coordinates:
void readPixelFromImage(ImVec2 mousePosition) {
unsigned char pixels[4];
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(GLint(mousePosition.x), GLint(mousePosition.y), 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
std::cout << "r: " << static_cast<int>(pixels[0]) << '\n';
std::cout << "g: " << static_cast<int>(pixels[1]) << '\n';
std::cout << "b: " << static_cast<int>(pixels[2]) << '\n';
std::cout << "a: " << static_cast<int>(pixels[3]) << '\n' << std::endl;
}
Any help is appreciated!
There is indeed something missing in your code:
You set up a new framebuffer that contains just a single texture attachment. This is okay, so glCheckFramebufferStatus reports GL_FRAMEBUFFER_COMPLETE. But there is no renderbuffer attached to your framebuffer, and nothing connects it to the screen. If you want your image rendered on screen, you should use the default framebuffer, which is created from your GL context.
However, the documentation says: "Default framebuffers cannot change their buffer attachments, [...]" (https://www.khronos.org/opengl/wiki/Framebuffer). So attaching a texture or renderbuffer to the default FB is certainly not possible. You could, however, generate a new FB as you did, render to it, and finally render the outcome (or blit the buffers) to your default FB, as sketched below. A good starting point for this technique is https://learnopengl.com/Advanced-Lighting/Deferred-Shading
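The blit variant could look roughly like this (a sketch; fbo, width and height stand in for your own variables):
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo); // your offscreen FB
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);   // 0 = default framebuffer
glBlitFramebuffer(0, 0, width, height,       // source rectangle
                  0, 0, width, height,       // destination rectangle
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_FRAMEBUFFER, 0);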
Moreover, if you just intend to read back rendered values from your GPU, it is more performant to use a renderbuffer instead of a texture. You can even have multiple renderbuffers attached to your framebuffer (as in deferred shading). For example, you could use a second renderbuffer to render an object/instance id (so that renderbuffer is single-channel integer), while the first renderbuffer is used for normal drawing. Reading the second renderbuffer with glReadPixels then tells you directly which instance was drawn at, e.g., the mouse position. This enables very efficient mouse picking, as sketched below.
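A sketch of that picking setup, with illustrative names; the fragment shader is assumed to write the id to output location 1:
// Second, single-channel integer renderbuffer for object/instance ids.
GLuint pickRbo;
glGenRenderbuffers(1, &pickRbo);
glBindRenderbuffer(GL_RENDERBUFFER, pickRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_R32UI, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_RENDERBUFFER, pickRbo);
// While rendering, both attachments must be listed as draw buffers:
GLenum bufs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);
// On click: read back the id under the mouse (integer formats need
// GL_RED_INTEGER as the read format).
GLuint id = 0;
glReadBuffer(GL_COLOR_ATTACHMENT1);
glReadPixels(x, y, 1, 1, GL_RED_INTEGER, GL_UNSIGNED_INT, &id);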
I'm trying to do high-throughput video streaming using OpenGL. I thought I'd figured it all out with my genius programming architecture, but - surprise - when doing more serious tests, I've been stonewalled with a performance problem.
The story goes like this:
It all starts by reserving a stack of PBOs (say, a hundred or so):
glGenBuffers(1, &index);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, index);
glBufferData(GL_PIXEL_UNPACK_BUFFER, size, 0, GL_STREAM_DRAW); // reserve n_payload bytes to index/handle pbo_id
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); // unbind (not mandatory)
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, index); // rebind (not mandatory)
payload = (GLubyte*)glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER); // release pointer to mapping buffer ** MANDATORY **
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); // unbind ** MANDATORY **
YUV pixel data is copied into the PBOs by separate decoder/uploader threads that use a common stack of available PBOs. The "payload" pointers you see above are accessed from these threads, and the data is copied (with memcpy) "directly" to the GPU, as sketched below. Once a PBO has been used, it is returned to the stack.
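A sketch of that upload path (frame->y_plane and frame->y_size are hypothetical names; one workable division of labor is that the map/unmap calls run in the GL thread, while only the memcpy runs in the decoder thread):
// GL thread: map the PBO and hand the pointer to a decoder thread.
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo->y_index);
GLubyte *payload = (GLubyte*)glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
// Decoder thread: the mapped pointer stays valid until glUnmapBuffer.
memcpy(payload, frame->y_plane, frame->y_size);
// GL thread: unmap before the PBO is used as a glTexSubImage2D source.
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo->y_index);
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);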
I also pre-reserve textures for each separate video stream. I reserve three textures (y, u and v), like this:
glEnable(GL_TEXTURE_2D);
glGenTextures(1, &index);
glBindTexture(GL_TEXTURE_2D, index);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, format, w, h, 0, format, GL_UNSIGNED_BYTE, 0); // no upload, just reserve
glBindTexture(GL_TEXTURE_2D, 0); // unbind
Rendering is done in a "master thread" (remember, the decoder / uploader threads are separate beasts) that reads frames from a fifo queue.
A critical step in rendering is copying data from the PBOs to the textures (tex->format is GL_RED):
// y
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo->y_index);
glBindTexture(GL_TEXTURE_2D, tex->y_index); // this is the texture we will manipulate
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, tex->w, tex->h, tex->format, GL_UNSIGNED_BYTE, 0); // copy from pbo to texture
// u
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo->u_index);
glBindTexture(GL_TEXTURE_2D, tex->u_index); // this is the texture we will manipulate
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, tex->w/2, tex->h/2, tex->format, GL_UNSIGNED_BYTE, 0); // copy from pbo to texture
// v
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo->v_index);
glBindTexture(GL_TEXTURE_2D, tex->v_index); // this is the texture we will manipulate
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, tex->w/2, tex->h/2, tex->format, GL_UNSIGNED_BYTE, 0); // copy from pbo to texture
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); // unbind // important!
glBindTexture(GL_TEXTURE_2D, 0); // unbind
And finally, the image is drawn using the OpenGL shading language (which is another story).
The Question: Do you see any OpenGL performance bottlenecks here?
Step (3), the PBO-to-texture copy, seems like a bottleneck: it starts to consume far too much time (10+ milliseconds!) when I try to do this with several cameras.
Of course, this could be due to something else clogging the OpenGL pipeline, but everything else (glDrawElements, etc.) seems to take at most 1 millisecond.
I've been reading about problems people have with glTexSubImage2D, but in my case I'm simply filling the textures from PBOs. That should be lightning fast, right? Could the GL_RED format pose a problem by being non-optimal for the driver?
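One guess worth ruling out (an assumption on my part, not something visible in the snippets above): GL_UNPACK_ALIGNMENT defaults to 4, and tightly packed one-byte-per-pixel rows whose width is not a multiple of 4 violate that, which can push the driver onto a slow conversion path:
// Tell GL the rows are tightly packed before uploading GL_RED data
// (the default unpack alignment is 4 bytes per row).
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, tex->w, tex->h, GL_RED, GL_UNSIGNED_BYTE, 0); // PBO still bound, as above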
Another thing: I'm not de/reallocating buffers here (I am using the same stack of pre-reserved PBOs), but re-allocating seems to be fast as well, if I understood this page correctly:
https://www.khronos.org/opengl/wiki/Buffer_Object_Streaming
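For reference, the orphaning idiom from that wiki page, applied to one of the PBOs above, would look roughly like this (a sketch):
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, index);
// Re-specifying the store with a null pointer "orphans" the old memory,
// so the driver can hand back a fresh block instead of stalling on
// transfers that are still in flight.
glBufferData(GL_PIXEL_UNPACK_BUFFER, size, 0, GL_STREAM_DRAW);
GLubyte *payload = (GLubyte*)glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
// ... fill payload ...
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);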
Any insight highly appreciated!
P. S. The complete project is here: https://github.com/elsampsa/valkka-core
EDIT 1:
I did some profiling: every now and then during streaming, both the PBO=>texture load (as shown in the code snippet) and glXMakeCurrent go completely crazy and each consume 10+ milliseconds (!). This happens quite sporadically. I tried adding some glFinish calls after each PBO=>texture load, but with little success (it seemed to stabilize things a bit, but I'm not sure).
EDIT 2:
I am slowly getting there. I ran some tests where I (a) upload with a PBO to the GPU and then (b) copy from the PBO to the texture (as in the sample code above). The speed seems to depend on the texture format passed to glTexImage2D. I tried to match the texture's format and OpenGL internal format by setting them to GL_RED and GL_RED (or GL_R8), respectively. But that is slow. If I instead use GL_RGBA for both, the PBO=>TEX copy is lightning fast, about 100x faster!
Here:
https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml
it says that
GL_RED : Each element is a single red component. The GL converts it to floating point and assembles it into an RGBA element by attaching 0 for green and blue, and 1 for alpha. Each component is clamped to the range [0,1].
.. but I don't want OpenGL to do that! How can I tell it that this is just plain LUMA, i.e. one byte per pixel, with no need to convert or fill anything, because I will only use it in the shader program?
Maybe this is impossible and I should use buffer textures instead (as suggested in the comments)? Buffer textures don't try to convert anything; they just treat the data as raw payload, right?
EDIT 3:
I'm trying to get dma to the texture buffer object:
// let's reserve a TBO
glGenBuffers(1, &tbo_index); // a buffer
glBindBuffer(GL_TEXTURE_BUFFER, tbo_index); // .. what is it
glBufferData(GL_TEXTURE_BUFFER, size, 0, GL_STREAM_DRAW); // .. how much
std::cout << "tbo " << tbo_index << std::endl;
glBindBuffer(GL_TEXTURE_BUFFER, 0); // unbind
// generate a texture
glGenTextures(1, &tex_index);
std::cout << "texture " << tex_index << std::endl;
// let's try to get dma to the texture buffer
glBindBuffer(GL_TEXTURE_BUFFER, tbo_index); // bind
payload = (GLubyte*)glMapBuffer(GL_TEXTURE_BUFFER, GL_WRITE_ONLY); // ** TODO: doesn't work
glUnmapBuffer(GL_TEXTURE_BUFFER); // release pointer to mapping buffer
glBindBuffer(GL_TEXTURE_BUFFER, 0); // unbind
std::cout << "tbo " << tbo_index << " at " << (long unsigned int)payload << std::endl;
This doesn't work: payload is always a null pointer. glMapBuffer works fine with PBOs, though, and it should work with TBOs as well.
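For comparison, a complete buffer-texture setup also needs a texture bound to GL_TEXTURE_BUFFER and associated with the buffer via glTexBuffer. A sketch, with GL_R8 matching the one-byte-per-pixel LUMA case:
glGenBuffers(1, &tbo_index);
glBindBuffer(GL_TEXTURE_BUFFER, tbo_index);
glBufferData(GL_TEXTURE_BUFFER, size, 0, GL_STREAM_DRAW);
glGenTextures(1, &tex_index);
glBindTexture(GL_TEXTURE_BUFFER, tex_index);
glTexBuffer(GL_TEXTURE_BUFFER, GL_R8, tbo_index); // texture now sources the buffer
glBindTexture(GL_TEXTURE_BUFFER, 0);
glBindBuffer(GL_TEXTURE_BUFFER, 0);
// In GLSL the data is then read with texelFetch() on a samplerBuffer;
// no format conversion is applied.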
As part of a 3D mesh viewer I am making in QT with QOpenGLWidget, I need to provide the ability for a user to click a node within the model. To restrict selection to visible nodes only, I have tried to include glReadPixels (GL_DEPTH_COMPONENT) in my selection algorithm.
My problem is that glReadPixels with GL_DEPTH_COMPONENT always returns 0. All the error outputs in the code below return 0 as well, while glReadPixels with GL_RED returns correct values:
GLenum err = GL_NO_ERROR;
QTextStream(stdout) << "error before reading gl_red = " << err << endl;
GLfloat winX, winY, myred, mydepth;
winX = mousex;
winY = this->height() - mousey;
glReadPixels(winX,winY,1,1,GL_RED,GL_FLOAT, &myred);
QTextStream(stdout) << "GL RED = " << myred << endl;
err = glGetError();
QTextStream(stdout) << "error after reading gl_red = " << err << endl;
glReadPixels(winX,winY,1,1,GL_DEPTH_COMPONENT,GL_FLOAT, &mydepth);
QTextStream(stdout) << "GL_DEPTH_COMPONENT = " << mydepth << endl;
err = glGetError();
QTextStream(stdout) << "error after reading gl_depth = " << err << endl;
My normal 3D rendering works fine, and I have glEnable(GL_DEPTH_TEST) in my initializeGL() function. At the moment I'm not using any fancy VBOs or VAOs. FaceMeshQualityColor and triVertices are both of type QVector<QVector3D>. My current face rendering follows this progression:
shader = shaderVaryingColor;
shader->bind();
shader->setAttributeArray("color", FaceMeshQualityColor.constData());
shader->enableAttributeArray("color");
shader->setUniformValue("mvpMatrix", pMatrix * vMatrix * mMatrix);
shader->setAttributeArray("vertex", triVertices.constData());
shader->enableAttributeArray("vertex");
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1,1);
glDrawArrays(GL_TRIANGLES, 0, triVertices.size());
glDisable(GL_POLYGON_OFFSET_FILL);
shader->disableAttributeArray("vertex");
shader->disableAttributeArray("color");
shader->release();
In my main file I explicitly request an OpenGL version that supports glReadPixels(GL_DEPTH_COMPONENT) (as opposed to OpenGL ES 2.0):
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
QSurfaceFormat format;
format.setVersion(2, 1);
format.setProfile(QSurfaceFormat::CoreProfile);
format.setDepthBufferSize(32);
QSurfaceFormat::setDefaultFormat(format);
MainWindow w;
w.showMaximized();
return a.exec();
}
Is my problem of glReadPixels(depth) not working somehow related to my treatment of the depth buffer?
Do I need to 'activate' the depth buffer before I can read from it with glReadPixels? Or do I need my vertex shader to explicitly write the depth to some other object?
QOpenGLWidget renders into an underlying FBO, and you can't simply read the depth component from that FBO if multisampling is enabled. The easiest solution is to set the sample count to zero, so your code will look like this:
QSurfaceFormat format;
format.setVersion(2, 1);
format.setProfile(QSurfaceFormat::CoreProfile);
format.setDepthBufferSize(32);
format.setSamples(0);
QSurfaceFormat::setDefaultFormat(format);
Or you can keep multisampling, but then an additional non-multisampled FBO is required into which the depth buffer can be copied.
class MyGLWidget : public QOpenGLWidget, protected QOpenGLFunctions
{
//
// OTHER WIDGET RELATED STUFF
//
QOpenGLFramebufferObject* mFBO = nullptr;
MyGLWidget::paintGL()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//
// DRAW YOUR SCENE HERE!
//
QOpenGLContext *ctx = QOpenGLContext::currentContext();
// FBO must be re-created! is there a way to reset it?
if(mFBO) delete mFBO;
QOpenGLFramebufferObjectFormat format;
format.setSamples(0);
format.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
mFBO = new QOpenGLFramebufferObject(size(), format);
glBindFramebuffer(GL_READ_FRAMEBUFFER, ctx->defaultFramebufferObject());
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, mFBO->handle());
ctx->extraFunctions()->glBlitFramebuffer(0, 0, width(), height(), 0, 0, mFBO->width(), mFBO->height(), GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT, GL_NEAREST);
mFBO->bind(); // must rebind, otherwise it won't work!
float mouseDepth = 1.f;
glReadPixels(mouseX, mouseY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &mouseDepth);
mFBO->release();
}
};
I'm trying to make a very basic example of rendering to the Oculus using their SDK v0.8. All I'm trying to do is render a solid color to both eyes. When I run this, everything appears to initialize correctly. The Oculus shows the health warning message, but all I see is a black screen once the health warning message goes away. What am I doing wrong here?
#define GLEW_STATIC
#include <GL/glew.h>
#define OVR_OS_WIN32
#include <OVR_CAPI_GL.h>
#include <SDL.h>
#include <iostream>
int main(int argc, char *argv[])
{
SDL_Init(SDL_INIT_VIDEO);
SDL_Window* window = SDL_CreateWindow("OpenGL", 100, 100, 800, 600, SDL_WINDOW_OPENGL);
SDL_GLContext context = SDL_GL_CreateContext(window);
//Initialize GLEW
glewExperimental = GL_TRUE;
glewInit();
// Initialize Oculus context
ovrResult result = ovr_Initialize(nullptr);
if (OVR_FAILURE(result))
{
std::cout << "ERROR: Failed to initialize libOVR" << std::endl;
SDL_Quit();
return -1;
}
// Connect to the Oculus headset
ovrSession hmd;
ovrGraphicsLuid luid;
result = ovr_Create(&hmd, &luid);
if (OVR_FAILURE(result))
{
std::cout << "ERROR: Oculus Rift not detected" << std::endl;
SDL_Quit();
return 0;
}
ovrHmdDesc desc = ovr_GetHmdDesc(hmd);
std::cout << "Found " << desc.ProductName << "connected Rift device" << std::endl;
ovrSizei recommendedTex0Size = ovr_GetFovTextureSize(hmd, ovrEyeType(0), desc.DefaultEyeFov[0], 1.0f);
ovrSizei bufferSize;
bufferSize.w = recommendedTex0Size.w;
bufferSize.h = recommendedTex0Size.h;
std::cout << "Buffer Size: " << bufferSize.w << ", " << bufferSize.h << std::endl;
// Generate FBO for oculus
GLuint oculusFbo = 0;
glGenFramebuffers(1, &oculusFbo);
// Create swap texture
ovrSwapTextureSet* pTextureSet = nullptr;
if (ovr_CreateSwapTextureSetGL(hmd, GL_SRGB8_ALPHA8, bufferSize.w, bufferSize.h,&pTextureSet) == ovrSuccess)
{
ovrGLTexture* tex = (ovrGLTexture*)&pTextureSet->Textures[0];
glBindTexture(GL_TEXTURE_2D, tex->OGL.TexId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}
// Create ovrLayerHeader
ovrEyeRenderDesc eyeRenderDesc[2];
eyeRenderDesc[0] = ovr_GetRenderDesc(hmd, ovrEye_Left, desc.DefaultEyeFov[0]);
eyeRenderDesc[1] = ovr_GetRenderDesc(hmd, ovrEye_Right, desc.DefaultEyeFov[1]);
ovrLayerEyeFov layer;
layer.Header.Type = ovrLayerType_EyeFov;
layer.Header.Flags = ovrLayerFlag_TextureOriginAtBottomLeft | ovrLayerFlag_HeadLocked;
layer.ColorTexture[0] = pTextureSet;
layer.ColorTexture[1] = pTextureSet;
layer.Fov[0] = eyeRenderDesc[0].Fov;
layer.Fov[1] = eyeRenderDesc[1].Fov;
ovrVector2i posVec;
posVec.x = 0;
posVec.y = 0;
ovrSizei sizeVec;
sizeVec.w = bufferSize.w;
sizeVec.h = bufferSize.h;
ovrRecti rec;
rec.Pos = posVec;
rec.Size = sizeVec;
layer.Viewport[0] = rec;
layer.Viewport[1] = rec;
ovrLayerHeader* layers = &layer.Header;
SDL_Event windowEvent;
while (true)
{
if (SDL_PollEvent(&windowEvent))
{
if (windowEvent.type == SDL_QUIT) break;
}
ovrGLTexture* tex = (ovrGLTexture*)&pTextureSet->Textures[0];
glBindFramebuffer(GL_FRAMEBUFFER, oculusFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex->OGL.TexId, 0);
glViewport(0, 0, bufferSize.w, bufferSize.h);
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
ovr_SubmitFrame(hmd, 0, nullptr, &layers, 1);
SDL_GL_SwapWindow(window);
}
SDL_GL_DeleteContext(context);
SDL_Quit();
return 0;
}
There are a number of problems here:
Not initializing ovrLayerEyeFov.RenderPose
Not using ovrSwapTextureSet correctly
Useless calls to SDL_GL_SwapWindow will cause stuttering
Possible undefined behavior when reading the texture while it's still bound for drawing
Not initializing ovrLayerEyeFov.RenderPose
Your main problem is that you're not setting the RenderPose member of the ovrLayerEyeFov structure. This member tells the SDK what pose you rendered at, and therefore how it should apply timewarp based on the current head pose (which might have changed since you rendered). By not setting this value you're basically giving the SDK a random head pose, which is almost certainly not a valid head pose.
Additionally, ovrLayerFlag_HeadLocked isn't needed for your layer type. It causes the Oculus to display the resulting image in a fixed position relative to your head. It might do what you want, but only if you properly initialize the layer.RenderPose members with the correct values (I'm not sure what those would be in the case of ovrLayerEyeFov, as I've only used the flag in combination with ovrLayerQuad).
What you should do is add the following right after the layer declaration to properly initialize it:
memset(&layer, 0, sizeof(ovrLayerEyeFov));
Then, inside your render loop you should add the following right after the check for a quit event:
ovrTrackingState tracking = ovr_GetTrackingState(hmd, 0, true);
layer.RenderPose[0] = tracking.HeadPose.ThePose;
layer.RenderPose[1] = tracking.HeadPose.ThePose;
This tells the SDK that this image was rendered from the point of view where the head currently is.
Not using ovrSwapTextureSet correctly
Another problem in the code is that you're using the texture set incorrectly. The documentation specifies that you need to use the texture pointed to by ovrSwapTextureSet.CurrentIndex:
ovrGLTexture* tex = (ovrGLTexture*)(&(pTextureSet->Textures[pTextureSet->CurrentIndex]));
...and then after each call to ovr_SubmitFrame you need to increment ovrSwapTextureSet.CurrentIndex and wrap it modulo ovrSwapTextureSet.TextureCount, like so:
pTextureSet->CurrentIndex = (pTextureSet->CurrentIndex + 1) % pTextureSet->TextureCount;
Useless calls to SDL_GL_SwapWindow will cause stuttering
The SDL_GL_SwapWindow(window); call is unnecessary and pointless, since you haven't drawn anything to the default framebuffer. Once you move away from drawing a solid color, this call will end up causing judder: it will block until v-sync (typically at 60 Hz), causing you to sometimes miss the refresh of the Oculus display. Right now this is invisible because your scene is just a solid color, but later on, when you're rendering objects in 3D, it will cause intolerable judder.
You can use SDL_GL_SwapWindow if you:
Ensure v-sync is disabled
Have a mirror texture available to draw to the window. (See the documentation for ovr_CreateMirrorTextureGL)
Possible framebuffer issues
I'm less certain about this one being a serious problem, but I would also suggest unbinding the framebuffer and detaching the Oculus-provided texture before sending it to ovr_SubmitFrame(), as I'm not certain that the behavior is well defined when reading from a texture attached to a framebuffer that is currently bound for drawing. It seems to have no impact on my local system, but undefined doesn't mean it doesn't work; it just means you can't rely on it to work.
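Concretely, that would mean something like this just before submitting (a sketch using the names from the question):
// Detach the swap texture and unbind the FBO before handing the
// texture to the compositor.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
ovr_SubmitFrame(hmd, 0, nullptr, &layers, 1);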
I've updated the sample code and put it here. As a bonus I've modified it so it draws one color on the left eye and a different color on the right eye, as well as setting up the buffer to provide for rendering one half of the buffer for each eye.
I have this issue with my loader:
int loadTexture(char *file)
{
// Load the image
SDL_Surface *tex = IMG_Load(file);
GLuint t;
cout << "Loading image: " << string(file) << "\n";
if (tex) {
glGenTextures(1, &t); // Generating 1 texture
glBindTexture(GL_TEXTURE_2D, t); // Bind the texture
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex->w, tex->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex->pixels); // Map texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // Set minifying parameter to linear
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // Set magnifying parameter to linear
SDL_FreeSurface(tex); // Free the surface from memory
cout << " > Image loaded: " << string(file) << "\n\n";
return t; // return the texture index
}
else {
cout << " > Failed to load image: " << IMG_GetError() << "\n\n";
SDL_FreeSurface(tex); // Free the surface from memory
return -1; // return -1 in case the image failed to load
}
}
It loads the images just fine, but only the last image loaded is used when drawing my objects:
textTest = loadTexture("assets/test_texture_64.png");
textTest2 = loadTexture("assets/test_texture2_64.png");
textTest3 = loadTexture("assets/test_texture3_64.png");
textTest4 = loadTexture("assets/test_texture4_64.png");
Texture files:
http://i.imgur.com/2K9NsZF.png
The program running:
http://i.imgur.com/5FMrA1b.png
Before drawing an object I call glBindTexture(GL_TEXTURE_2D, t), where t is the name of the texture I want to use. I'm new to OpenGL and C++, so I'm having trouble understanding the issue here.
You should check whether loadTexture returns different texture IDs for each texture you load. Then you need to be sure that you bind the right texture onto each object using glBindTexture(...), which you say you are already doing.
How are you drawing your objects right now? Is multitexturing involved? Be sure to have the right glPushMatrix / glPopMatrix calls before and after drawing your object.
From looking at your loader, it looks correct to me, although you do not glEnable and glDisable GL_TEXTURE_2D; that should not matter, though.
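A minimal sketch of the per-object binding to double-check against (drawObject1/drawObject2 are hypothetical placeholders for however you emit each object's geometry):
glBindTexture(GL_TEXTURE_2D, textTest);  // bind before drawing object 1
drawObject1();
glBindTexture(GL_TEXTURE_2D, textTest2); // rebind before drawing object 2
drawObject2();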