Oculus 0.8 SDK Black Screen - opengl

I'm trying to make a very basic example of rendering to the Oculus using their SDK v0.8. All I'm trying to do is render a solid color to both eyes. When I run this, everything appears to initialize correctly and the Rift shows the health and safety warning, but once that message goes away all I see is a black screen. What am I doing wrong here?
#define GLEW_STATIC
#include <GL/glew.h>
#define OVR_OS_WIN32
#include <OVR_CAPI_GL.h>
#include <SDL.h>
#include <iostream>
int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* window = SDL_CreateWindow("OpenGL", 100, 100, 800, 600, SDL_WINDOW_OPENGL);
    SDL_GLContext context = SDL_GL_CreateContext(window);

    // Initialize GLEW
    glewExperimental = GL_TRUE;
    glewInit();

    // Initialize Oculus context
    ovrResult result = ovr_Initialize(nullptr);
    if (OVR_FAILURE(result))
    {
        std::cout << "ERROR: Failed to initialize libOVR" << std::endl;
        SDL_Quit();
        return -1;
    }

    // Connect to the Oculus headset
    ovrSession hmd;
    ovrGraphicsLuid luid;
    result = ovr_Create(&hmd, &luid);
    if (OVR_FAILURE(result))
    {
        std::cout << "ERROR: Oculus Rift not detected" << std::endl;
        SDL_Quit();
        return 0;
    }

    ovrHmdDesc desc = ovr_GetHmdDesc(hmd);
    std::cout << "Found " << desc.ProductName << " connected Rift device" << std::endl;

    ovrSizei recommendedTex0Size = ovr_GetFovTextureSize(hmd, ovrEyeType(0), desc.DefaultEyeFov[0], 1.0f);
    ovrSizei bufferSize;
    bufferSize.w = recommendedTex0Size.w;
    bufferSize.h = recommendedTex0Size.h;
    std::cout << "Buffer Size: " << bufferSize.w << ", " << bufferSize.h << std::endl;

    // Generate FBO for oculus
    GLuint oculusFbo = 0;
    glGenFramebuffers(1, &oculusFbo);

    // Create swap texture
    ovrSwapTextureSet* pTextureSet = nullptr;
    if (ovr_CreateSwapTextureSetGL(hmd, GL_SRGB8_ALPHA8, bufferSize.w, bufferSize.h, &pTextureSet) == ovrSuccess)
    {
        ovrGLTexture* tex = (ovrGLTexture*)&pTextureSet->Textures[0];
        glBindTexture(GL_TEXTURE_2D, tex->OGL.TexId);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    }

    // Create ovrLayerHeader
    ovrEyeRenderDesc eyeRenderDesc[2];
    eyeRenderDesc[0] = ovr_GetRenderDesc(hmd, ovrEye_Left, desc.DefaultEyeFov[0]);
    eyeRenderDesc[1] = ovr_GetRenderDesc(hmd, ovrEye_Right, desc.DefaultEyeFov[1]);

    ovrLayerEyeFov layer;
    layer.Header.Type = ovrLayerType_EyeFov;
    layer.Header.Flags = ovrLayerFlag_TextureOriginAtBottomLeft | ovrLayerFlag_HeadLocked;
    layer.ColorTexture[0] = pTextureSet;
    layer.ColorTexture[1] = pTextureSet;
    layer.Fov[0] = eyeRenderDesc[0].Fov;
    layer.Fov[1] = eyeRenderDesc[1].Fov;

    ovrVector2i posVec;
    posVec.x = 0;
    posVec.y = 0;
    ovrSizei sizeVec;
    sizeVec.w = bufferSize.w;
    sizeVec.h = bufferSize.h;
    ovrRecti rec;
    rec.Pos = posVec;
    rec.Size = sizeVec;
    layer.Viewport[0] = rec;
    layer.Viewport[1] = rec;
    ovrLayerHeader* layers = &layer.Header;

    SDL_Event windowEvent;
    while (true)
    {
        if (SDL_PollEvent(&windowEvent))
        {
            if (windowEvent.type == SDL_QUIT) break;
        }

        ovrGLTexture* tex = (ovrGLTexture*)&pTextureSet->Textures[0];
        glBindFramebuffer(GL_FRAMEBUFFER, oculusFbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex->OGL.TexId, 0);
        glViewport(0, 0, bufferSize.w, bufferSize.h);
        glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);

        ovr_SubmitFrame(hmd, 0, nullptr, &layers, 1);
        SDL_GL_SwapWindow(window);
    }

    SDL_GL_DeleteContext(context);
    SDL_Quit();
    return 0;
}

There are a number of problems here:
Not initializing ovrLayerEyeFov.RenderPose
Not using ovrSwapTextureSet correctly
Useless calls to SDL_GL_SwapWindow will cause stuttering
Possible undefined behavior when reading the texture while it's still bound for drawing
Not initializing ovrLayerEyeFov.RenderPose
Your main problem is that you're not setting the RenderPose member of the ovrLayerEyeFov structure. This member tells the SDK what pose you rendered at, and therefore how it should apply timewarp based on the current head pose (which might have changed since you rendered). By not setting this value you're basically giving the SDK a random head pose, which is almost certainly not a valid head pose.
Additionally, ovrLayerFlag_HeadLocked isn't needed for your layer type. It causes the Oculus to display the resulting image in a fixed position relative to your head. It might do what you want, but only if you properly initialize the layer.RenderPose members with the correct values (I'm not sure what those would be in the case of ovrLayerEyeFov, as I've only used the flag in combination with ovrLayerQuad).
What you should do is add the following right after the layer declaration to properly initialize it:
memset(&layer, 0, sizeof(ovrLayerEyeFov));
Then, inside your render loop you should add the following right after the check for a quit event:
ovrTrackingState tracking = ovr_GetTrackingState(hmd, 0, true);
layer.RenderPose[0] = tracking.HeadPose.ThePose;
layer.RenderPose[1] = tracking.HeadPose.ThePose;
This tells the SDK that this image was rendered from the point of view where the head currently is.
Not using ovrSwapTextureSet correctly
Another problem in the code is that you're not using the texture set correctly. The documentation specifies that when using a texture set, you need to render to the texture at index ovrSwapTextureSet.CurrentIndex:
ovrGLTexture* tex = (ovrGLTexture*)(&(pTextureSet->Textures[pTextureSet->CurrentIndex]));
...and then, after each call to ovr_SubmitFrame, you need to increment ovrSwapTextureSet.CurrentIndex and take it modulo ovrSwapTextureSet.TextureCount, like so:
pTextureSet->CurrentIndex = (pTextureSet->CurrentIndex + 1) % pTextureSet->TextureCount;
Useless calls to SDL_GL_SwapWindow will cause stuttering
The SDL_GL_SwapWindow(window); call is unnecessary, since you haven't drawn anything to the default framebuffer. Once you move away from drawing a solid color, this call will end up causing judder: it blocks until v-sync (typically 60 Hz), so you will sometimes miss the refresh of the Oculus display. Right now this is invisible because your scene is just a solid color, but later on, when you're rendering objects in 3D, it will cause intolerable judder.
You can use SDL_GL_SwapWindow if you:
Ensure v-sync is disabled
Have a mirror texture available to draw to the window (see the documentation for ovr_CreateMirrorTextureGL and the sketch below)
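For reference, the mirror path looks roughly like this with the 0.8 SDK. This is only a sketch: the 800x600 window size and the GL_SRGB8_ALPHA8 format are assumptions, and you should check the exact ovr_CreateMirrorTextureGL signature against your SDK headers:
// Once, after creating the swap texture set: ask the SDK for a mirror texture and wrap it
// in a read-only FBO so it can be blitted to the default framebuffer.
ovrTexture* mirrorTexture = nullptr;
ovr_CreateMirrorTextureGL(hmd, GL_SRGB8_ALPHA8, 800, 600, &mirrorTexture);

GLuint mirrorFbo = 0;
glGenFramebuffers(1, &mirrorFbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, mirrorFbo);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       ((ovrGLTexture*)mirrorTexture)->OGL.TexId, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);

// Each frame, after ovr_SubmitFrame (and only with v-sync disabled):
glBindFramebuffer(GL_READ_FRAMEBUFFER, mirrorFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
// The mirror texture comes out top-to-bottom, so flip it vertically while blitting.
glBlitFramebuffer(0, 600, 800, 0, 0, 0, 800, 600, GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
SDL_GL_SwapWindow(window);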
Possible framebuffer issues
I'm less certain about this one being a serious problem, but I would also suggest unbinding the framebuffer and detaching the Oculus-provided texture before passing it to ovr_SubmitFrame(), as I'm not certain the behavior is well defined when reading from a texture that is attached to a framebuffer currently bound for drawing. It seems to have no impact on my local system, but undefined doesn't mean it won't work; it just means you can't rely on it working.
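Putting these fixes together (with the layer memset-initialized and the HeadLocked flag dropped at setup, as described above), the body of your render loop would look roughly like this sketch, reusing the variable names from your code; it still only clears to a solid color:
// Tell the SDK which pose this frame is rendered from.
ovrTrackingState tracking = ovr_GetTrackingState(hmd, 0, true);
layer.RenderPose[0] = tracking.HeadPose.ThePose;
layer.RenderPose[1] = tracking.HeadPose.ThePose;

// Render into the texture the SDK expects us to use this frame.
ovrGLTexture* tex = (ovrGLTexture*)&pTextureSet->Textures[pTextureSet->CurrentIndex];
glBindFramebuffer(GL_FRAMEBUFFER, oculusFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex->OGL.TexId, 0);
glViewport(0, 0, bufferSize.w, bufferSize.h);
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

// Detach the texture and unbind the FBO before handing the texture to the SDK.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

ovr_SubmitFrame(hmd, 0, nullptr, &layers, 1);

// Advance to the next texture in the swap set for the next frame.
pTextureSet->CurrentIndex = (pTextureSet->CurrentIndex + 1) % pTextureSet->TextureCount;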
I've updated the sample code and put it here. As a bonus I've modified it so it draws one color on the left eye and a different color on the right eye, as well as setting up the buffer to provide for rendering one half of the buffer for each eye.

Related

Displaying text with SDL TTF with SDL2 and OpenGL

I'm trying to display text using SDL2_ttf and OpenGL. A weird texture appears in the window; it has the right size and the right position, but you can't see any letters.
I've tried using SDL_CreateRGBSurface(), thinking it might be a cleaner way to retrieve the pixels, but it didn't work either. My surface is never NULL and always passes the validation test.
I call the get_font() function before the while() loop, and the displayMoney() function inside it, right after glClear(GL_COLOR_BUFFER_BIT).
SDL, TTF and OpenGL are initialized properly and I have created an OpenGL context. Here's the problematic code:
SDL_Surface* get_font()
{
    TTF_Font *font;
    font = TTF_OpenFont("lib/ariali.ttf", 35);
    if (!font) cout << "problem loading font" << endl;
    SDL_Color white = {150, 200, 200};
    SDL_Color black = {0, 100, 0};
    SDL_Surface* text = TTF_RenderText_Shaded(font, "MO", white, black);
    if (!text) cout << "text not loaded" << endl;
    return text;
}

void displayMoney(SDL_Surface* surface)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_TEXTURE_2D);
    GLuint TextureID = 0;
    glGenTextures(1, &TextureID);
    glBindTexture(GL_TEXTURE_2D, TextureID);
    int Mode = GL_RGB;
    if (surface->format->BytesPerPixel == 4) {
        Mode = GL_RGBA;
    }
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, Mode, 128, 64, 0, Mode, GL_UNSIGNED_BYTE, surface->pixels);
    glPushMatrix();
    glTranslated(100, 100, 0);
    glScalef(100, 100, 0);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 1); glVertex2f(-0.5f, -0.5f);
    glTexCoord2f(1, 1); glVertex2f(0.5f, -0.5f);
    glTexCoord2f(1, 0); glVertex2f(0.5f, 0.5f);
    glTexCoord2f(0, 0); glVertex2f(-0.5f, 0.5f);
    glEnd();
    glPopMatrix();
    glBindTexture(GL_TEXTURE_2D, 0);
}
#include <SDL2/SDL.h>
#include <stdlib.h>
#include <stdio.h>
#include <iostream>
using namespace std;
#include <GL/gl.h>
#include <GL/glu.h>
#include <stb_image/stb_image.h>
#include <SDL2_ttf/SDL_ttf.h>
#include "init.h"
int main(int argc, char **argv) {
    SDL_Window* window = init();
    if (window == nullptr) {
        cout << "Error window init" << endl;
    }
    if (TTF_Init() < 0) {
        cout << "Error TTF init" << endl;
    }
    SDL_Surface* text = get_font();
    while (loop) {
        glClear(GL_COLOR_BUFFER_BIT);
        displayMoney(text);
        ...
        SDL_GL_SwapWindow(window);
There aren't any error messages. Also, instead of using my surface, I tested my code with an image by using the stbi_load function and it worked perfectly well. The issue therefore seems to be with the SDL part.
EDIT : I've recently found out the surface I get from my text has the following properties: Rmask=Gmask=Bmask=Amask = 0. This is obviously a problem but I've no idea how to fix it...
As stated in the SDL_ttf documentation at https://www.libsdl.org/projects/SDL_ttf/docs/SDL_ttf.html#SEC42:
Shaded: Create an 8-bit palettized surface and render the given text at high quality with the given font and colors. The 0 pixel value is background, while other pixels have varying degrees of the foreground color from the background color.
So your resulting surface is an 8-bit palettized surface, not RGBA (which is also indicated by the missing colour masks in the surface format, as you've noted). An RGBA surface with an alpha channel is produced by e.g. TTF_RenderText_Blended; alternatively, use a different texture format or perform a format conversion. You also need to pass the surface's width/height to glTexImage2D instead of the 128/64 constants, as the surface size may vary.
You also have several resource leaks in the question's code: you create a new texture on every draw and never delete it (which is also unnecessary if the text isn't changing), and you never close the font with TTF_CloseFont.
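To make that concrete, here's a rough sketch of the approach: render the text once with TTF_RenderText_Blended, convert it to a known 32-bit RGBA layout, and upload it once using the surface's real size. The createTextTexture helper is made up for illustration, and SDL_PIXELFORMAT_RGBA32 requires SDL 2.0.5 or newer (use SDL_PIXELFORMAT_ABGR8888 on older versions):
SDL_Surface* get_font()
{
    TTF_Font* font = TTF_OpenFont("lib/ariali.ttf", 35);
    if (!font) { cout << "problem loading font" << endl; return nullptr; }
    SDL_Color white = {150, 200, 200, 255};               // give the colour an explicit alpha
    SDL_Surface* text = TTF_RenderText_Blended(font, "MO", white);
    TTF_CloseFont(font);                                  // font no longer needed once rendered
    if (!text) { cout << "text not loaded" << endl; return nullptr; }
    // Force a predictable RGBA byte order before uploading to GL.
    SDL_Surface* rgba = SDL_ConvertSurfaceFormat(text, SDL_PIXELFORMAT_RGBA32, 0);
    SDL_FreeSurface(text);
    return rgba;
}

// Hypothetical helper: create the GL texture once, outside the render loop, and reuse its ID.
GLuint createTextTexture(SDL_Surface* surface)
{
    GLuint id = 0;
    glGenTextures(1, &id);
    glBindTexture(GL_TEXTURE_2D, id);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surface->w, surface->h, 0,   // real size, not 128/64
                 GL_RGBA, GL_UNSIGNED_BYTE, surface->pixels);
    glBindTexture(GL_TEXTURE_2D, 0);
    return id;
}
In the render loop you then just bind the returned texture ID and draw the quad; call glDeleteTextures on it (and SDL_FreeSurface on the surface) once, when the text is no longer needed.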

Texture binding isn't working / C++ / OpenGL [duplicate]

This question already has answers here:
What are the usual troubleshooting steps for OpenGL textures not showing?
(6 answers)
OpenGL object in C++ RAII class no longer works
(2 answers)
Closed 5 years ago.
I'm trying to create a Texture class for my project which initializes and loads a texture from an image. The texture loads well, but whenever I want to get the texture ID from outside the class by calling the GetTexture() function, glIsTexture() no longer considers the returned value (the texture ID) to be a texture. And the face I want to texture stays blank.
Also, I tried to bind the texture with glBindTexture() directly from the Texture class itself with the function Texture::SetActive() but it still doesn't work.
And finally, when I return the texture ID directly from the function, the texture displays correctly.
Is there something I'm missing here? I don't really know what to look for at this point.
Thanks in advance for your help!
Here's my Texture class:
// Constructor
Texture::Texture(std::string const& texPath) {
    SDL_Surface *texture = nullptr, *newFormatTexture = nullptr, *flippedTexture = nullptr;
    SDL_PixelFormat tmpFormat;
    Uint32 amask, rmask, gmask, bmask;

#if SDL_BYTEORDER == SDL_BIG_ENDIAN
    rmask = 0xFF000000;
    gmask = 0x00FF0000;
    bmask = 0x0000FF00;
    amask = 0x000000FF;
#else
    rmask = 0x000000FF;
    gmask = 0x0000FF00;
    bmask = 0x00FF0000;
    amask = 0xFF000000;
#endif

    if ((texture = IMG_Load(texPath.c_str())) == nullptr) {
        std::cerr << "[ERROR] : Could not load texture " << texPath << ". Skipping..." << std::endl;
    }
    tmpFormat = *(texture->format);
    tmpFormat.BitsPerPixel = 32;
    tmpFormat.BytesPerPixel = 4;
    tmpFormat.Rmask = rmask;
    tmpFormat.Gmask = gmask;
    tmpFormat.Bmask = bmask;
    tmpFormat.Amask = amask;
    if ((newFormatTexture = SDL_ConvertSurface(texture, &tmpFormat, SDL_SWSURFACE)) == nullptr) {
        std::cerr << "[ERROR] : Couldn't convert surface to given format." << std::endl;
    }
    if ((flippedTexture = this->FlipSurface(newFormatTexture)) == nullptr) {
        std::cerr << "[ERROR] : Couldn't flip surface." << std::endl;
    }
    glGenTextures(1, &(this->_textureID));
    glBindTexture(GL_TEXTURE_2D, this->_textureID);
    glTexImage2D(GL_TEXTURE_2D, 0, 4, flippedTexture->w, flippedTexture->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, flippedTexture->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glBindTexture(GL_TEXTURE_2D, 0);
    SDL_FreeSurface(flippedTexture);
    SDL_FreeSurface(newFormatTexture);
    SDL_FreeSurface(texture);
}

Texture::Texture(unsigned char *texData, int width, int height) {
    glGenTextures(1, &(this->_textureID));
    glBindTexture(GL_TEXTURE_2D, this->_textureID);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, texData);
    glBindTexture(GL_TEXTURE_2D, 0);
}

Texture::~Texture() {
    glDeleteTextures(1, &(this->_textureID));
}

Texture Texture::CreateTexture(std::string const& texPath) {
    Texture tex(texPath);
    return (tex);
}

Texture Texture::CreateTexture(unsigned char *texData, int width, int height) {
    Texture tex(texData, width, height);
    return (tex);
}

unsigned int Texture::GetTexture() const {
    return (this->_textureID);
}

void Texture::SetActive() {
    glBindTexture(GL_TEXTURE_2D, this->_textureID);
}
The main function where I load and use my texture:
int WinMain(void) {
    Window window("Hello", 640, 480);
    double angleX, angleZ;
    Texture tex;
    int height;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(70, (double)640/480, 1, 1000);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_DEPTH_TEST);

    tex = Texture::CreateTexture("caisse.jpg");

    while (!window.Quit()) {
        Input::Update();
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(3, 4, 2, 0, 0, 0, 0, 0, 1);
        tex.SetActive();
        glBegin(GL_QUADS);
        glTexCoord2d(0, 1);
        glVertex3d(1, 1, 1);
        glTexCoord2d(0, 0);
        glVertex3d(1, 1, -1);
        glTexCoord2d(1, 0);
        glVertex3d(-1, 1, -1);
        glTexCoord2d(1, 1);
        glVertex3d(-1, 1, 1);
        glEnd();
        glFlush();
        window.RefreshDisplay();
    }
    return (0);
}
EDIT
I solved my problem.
As described in this topic: What are the usual troubleshooting steps for OpenGL textures not showing?, the initialisation of the texture must not be done in the constructor.
Thanks for the help :)
OK, let's look at this:
Texture Texture::CreateTexture(std::string const& texPath) {
    Texture tex(texPath);
    return (tex);
}
I'm going to assume that this is a static function. So it creates a Texture object on the stack. And tex contains an OpenGL texture object. The function then returns this object.
By the rules of C++, the lifetime of tex is limited to the scope in which it is created. Namely, Texture::CreateTexture. Which means that, at the end of this function, tex will be destroyed by having its destructor invoked.
But since you returned tex, before that happens, tex will be used to initialize the return value of the function. That return value happens to be an object of type Texture, so the compiler will invoke Texture's copy constructor to initialize the return value.
So, right before tex is destroyed, there are two Texture objects: tex itself and the return value of type Texture that was copied from tex. So far, so good.
Now, tex is destroyed. Texture::~Texture calls glDeleteTextures on the texture object name contained within it. That destroys the texture created in the constructor. Fine.
So... what happens now? Well, let's back up to the creation of the return value from CreateTexture. I said that it would invoke the copy constructor of Texture to construct it, passing tex as the object to copy from.
You did not post your complete code, but given the nature of the other code you've written, I'd bet that you didn't write a copy constructor for Texture. That's fine, because the compiler will make one for you.
Only that's not fine. Why? Because right before tex gets destroyed, there are two Texture objects. And both of them store the same OpenGL texture object name. How did that happen?
Because you copied the texture object from tex into the return value. That's what the compiler-generated copy constructor does: it copies everything in the class.
So when tex is destroyed, it is destroying the OpenGL texture it just returned.
Texture should not be a copyable class. It should be move-only, just like many resource-containing classes in C++.
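For what it's worth, here's a rough sketch of what a move-only Texture could look like, assuming the only resource it owns is the _textureID member from the question (the GL and standard headers are assumed to be included already):
#include <utility>   // std::swap

class Texture {
public:
    Texture() = default;
    explicit Texture(std::string const& texPath);   // as in the question: creates _textureID

    ~Texture() {
        if (_textureID) glDeleteTextures(1, &_textureID);
    }

    // Copying is forbidden: two objects must never own the same GL texture name.
    Texture(Texture const&) = delete;
    Texture& operator=(Texture const&) = delete;

    // Moving transfers ownership and leaves the source owning nothing.
    Texture(Texture&& other) noexcept : _textureID(other._textureID) {
        other._textureID = 0;
    }
    Texture& operator=(Texture&& other) noexcept {
        std::swap(_textureID, other._textureID);    // the old texture dies with 'other'
        return *this;
    }

    unsigned int GetTexture() const { return _textureID; }
    void SetActive() { glBindTexture(GL_TEXTURE_2D, _textureID); }

private:
    GLuint _textureID = 0;
};
With this in place, tex = Texture::CreateTexture("caisse.jpg"); transfers ownership of the GL texture into tex instead of leaving two objects that both try to delete the same texture name.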

Loading OpenGL texture with SOIL in std::thread raises "Integer Division by Zero"

I can load a texture just fine in SOIL/OpenGL normally. No errors, everything works fine:
// this is inside my texture loading code in my texture class
// that i normally use for loading textures
image = SOIL_load_OGL_texture
(
    file,
    SOIL_LOAD_AUTO,
    SOIL_CREATE_NEW_ID,
    NULL
);
However, using that same code and calling it from a std::thread, I get an unhandled "Integer Division by Zero" exception at the line image = SOIL_load_OGL_texture:
void loadMe() {
    Texture* abc = new Texture("res/img/office.png");
}
void loadStuff() {
    Texture* loading = new Texture("res/img/head.png"); // < always works
    loadMe();                                           // < always works
    std::thread textures(loadMe);                       // < always "integer division by zero"
Here's some relevant code from my Texture class:
// inside the class
private:
    GLint w, h;
    GLuint image;

// loading the texture (called by constructor if filename is given)
void Texture::loadImage(const char* file)
{
    image = SOIL_load_OGL_texture
    (
        file,
        SOIL_LOAD_AUTO,
        SOIL_CREATE_NEW_ID,
        NULL
    );
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, image);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &w);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &h);
    glBindTexture(GL_TEXTURE_2D, 0);
    if (image <= 0)
        std::cout << file << " failed to load!\n";
    else
        std::cout << file << " loaded.\n";
    glDisable(GL_TEXTURE_2D);
}
It raises the exception exactly at image = SOIL_load_OGL_texture, and when I go into the debugger I see things like w = -816294792 and h = -816294792, but I guess that just means they haven't been set yet, as the debugger also shows that when loading the other textures.
Also, the SOIL_load_OGL_texture part of the code works fine by itself, outside of the Texture class, even in a std::thread.
Any idea what's going on here?
This is how you do it. Note that, as others have mentioned in the comments, a GL context must be current on whichever thread makes GL calls, and it can only be current on one thread at a time. In practice you cannot make GL API calls from multiple threads without handing ownership of the context back and forth. Hence, if the intention is to move the image-loading overhead off the main thread, it is recommended to load and decode the image file into a buffer in a separate thread, then hand that buffer to glTexImage2D on the main thread (a sketch of that split follows the code below). Until the image is loaded, a dummy texture can be displayed.
I tried checking what platform you are on (see the comment above); since I did not see a response, I am assuming Linux below.
/* Regular GL context creation foo */
/* Regular attribute, uniform, shader creation foo */

/* Create a thread that does loading with SOIL in function SOIL_loader */
std::thread textureloader(SOIL_loader);

/* Wait for loader thread to finish,
   thus defeating the purpose of a thread. Ideally,
   only the image file read/decode should happen in separate thread */
textureloader.join();

/* Make the GL context current back again in the main thread
   for other actions */
glfwMakeContextCurrent((GLFWwindow*)window);

/* Some other foo */
And this is the loader thread function:
void SOIL_loader()
{
    glfwMakeContextCurrent((GLFWwindow*)window);
    SOIL_load_OGL_texture
    (
        "./img_test.png",
        SOIL_LOAD_AUTO,
        SOIL_CREATE_NEW_ID /* or passed ID */,
        NULL
    );
    GL_CHECK(SOIL);
}
Tested on Ubuntu 14.04, Mesa, and glfw3.
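If the goal is purely to move the file reading/decoding off the main thread, as recommended above, the split could look roughly like this sketch. The PendingImage struct and finishUpload helper are made up for illustration, only SOIL's pure image decoding runs off the main thread, all GL calls stay on the thread that owns the context, and the SOIL include path may differ on your setup:
#include <thread>
#include <atomic>
#include <SOIL.h>   // path depends on how SOIL is installed

struct PendingImage {
    unsigned char* pixels = nullptr;
    int w = 0, h = 0;
    std::atomic<bool> ready{false};
};

// Runs in the worker thread: decode only, no GL calls here.
void decodeThread(const char* file, PendingImage* out)
{
    int channels = 0;
    out->pixels = SOIL_load_image(file, &out->w, &out->h, &channels, SOIL_LOAD_RGBA);
    out->ready = true;
}

// Called once per frame on the main (GL) thread; returns true once the texture is filled.
bool finishUpload(PendingImage& img, GLuint texture)
{
    if (!img.ready || !img.pixels) return false;
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img.w, img.h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, img.pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glBindTexture(GL_TEXTURE_2D, 0);
    SOIL_free_image_data(img.pixels);
    img.pixels = nullptr;
    return true;
}
On the main thread you would start it with std::thread(decodeThread, "res/img/office.png", &pending).detach(); and call finishUpload(pending, textureId) once per frame, drawing a placeholder texture until it returns true.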

My OpenGL program only uses the last texture loaded (C++)

I have this issue with my loader:
int loadTexture(char *file)
{
    // Load the image
    SDL_Surface *tex = IMG_Load(file);
    GLuint t;
    cout << "Loading image: " << string(file) << "\n";
    if (tex) {
        glGenTextures(1, &t);            // Generating 1 texture
        glBindTexture(GL_TEXTURE_2D, t); // Bind the texture
        glTexImage2D(GL_TEXTURE_2D, 0, 3, tex->w, tex->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex->pixels); // Map texture
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // Set minifying parameter to linear
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // Set magnifying parameter to linear
        SDL_FreeSurface(tex);            // Free the surface from memory
        cout << " > Image loaded: " << string(file) << "\n\n";
        return t;                        // return the texture index
    }
    else {
        cout << " > Failed to load image: " << IMG_GetError() << "\n\n";
        SDL_FreeSurface(tex);            // Free the surface from memory
        return -1;                       // return -1 in case the image failed to load
    }
}
It loads the images just fine but only the last image loaded is used when drawing my objects:
textTest = loadTexture("assets/test_texture_64.png");
textTest2 = loadTexture("assets/test_texture2_64.png");
textTest3 = loadTexture("assets/test_texture3_64.png");
textTest4 = loadTexture("assets/test_texture4_64.png");
Texture files:
http://i.imgur.com/2K9NsZF.png
The program running:
http://i.imgur.com/5FMrA1b.png
Before drawing an object I use glBindTexture(GL_TEXTURE_2D, t) where t is the name of the texture I want to use. I'm new to OpenGL and C++ so I'm having trouble understanding the issue here.
You should check if loadTexture returns different texture IDs when you load the textures. Then you need to be sure that you bind the right textures onto the object using glBindTexture(...) which you say you are doing already.
How are you drawing your object right now? Is multi-texturing involved? Be sure to have the right glPushMatrix / glPopMatrix calls before and after drawing your object.
From looking at your loader, it looks correct to me, although you do not glEnable and glDisable GL_TEXTURE_2D; that should not matter here.
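As a quick sanity check you could print the returned IDs and then bind explicitly before each object, along these lines (the draw calls are placeholders for however you currently draw your geometry):
// Verify that the loader really returned distinct texture names.
std::cout << textTest << " " << textTest2 << " " << textTest3 << " " << textTest4 << "\n";

// Bind the texture you want right before drawing each object.
glBindTexture(GL_TEXTURE_2D, textTest);
drawFirstObject();    // placeholder for your own draw code

glBindTexture(GL_TEXTURE_2D, textTest2);
drawSecondObject();   // placeholder for your own draw code

glBindTexture(GL_TEXTURE_2D, 0);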

Efficient way of reading depth values from depth buffer

For an algorithm of mine I need to be able to access the depth buffer. I have no problem at all doing this using glReadPixels, but reading an 800x600 window is extremely slow (from 300 FPS down to 20 FPS).
I'm reading a lot about this and I think dumping the depth buffer to a texture would be faster. I know how to create a texture, but how do I get the depth out?
Creating an FBO and creating the texture from there might be even faster, at the moment I am using an FBO (but still in combination with glReadPixels).
So what is the fastest way to do this?
(I'm probably not able to use GLSL because I don't know anything about it and I don't have much time left to learn, deadlines!)
edit:
Would a PBO work? As described here: http://www.songho.ca/opengl/gl_pbo.html it can be a lot faster, but I cannot change buffers all the time as in the example.
Edit2:
How would I go about putting the depth data in the PBO? At the moment I do:
glGenBuffersARB(1, &pboId);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboId);
glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, 800*600*sizeof(GLfloat),0, GL_STREAM_READ_ARB);
and right before my glReadPixels call I call glBindBuffer again. The effect is that I read nothing at all. If I disable the PBOs it all works.
Final edit:
I guess I solved it; I had to use:
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboId);
glReadPixels( 0, 0,Engine::fWidth, Engine::fHeight, GL_DEPTH_COMPONENT,GL_FLOAT, BUFFER_OFFSET(0));
GLuint *pixels = (GLuint*)glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY);
This gave me a 20 FPS increase. It's not that much, but it's something.
So I used 2 PBOs, but I'm still encountering a problem: my code only gets executed once.
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboIds[index]);
std::cout << "Reading pixels" << std::endl;
glReadPixels(0, 0, Engine::fWidth, Engine::fHeight, GL_DEPTH_COMPONENT, GL_FLOAT, BUFFER_OFFSET(0));
std::cout << "Getting pixels" << std::endl;
// glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, 800*600*sizeof(GLfloat), 0, GL_STREAM_DRAW_ARB);
GLfloat *pixels = (GLfloat*)glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY);
int count = 0;
for (int i = 0; i != 800*600; ++i) {
    std::cout << pixels[i] << std::endl;
}
The last line executes once, but only once; after that the method keeps getting called (which is normal) but stops at the access to pixels.
I apparently forgot to call glUnmapBufferARB; that kinda solved it, though my framerate is slower again.
I decided to give FBOs a go, but I stumbled across a problem:
Initialising FBO:
glGenFramebuffersEXT(1, framebuffers);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffers[0]);
std::cout << "framebuffer generated, id: " << framebuffers[0] << std::endl;
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
glGenRenderbuffersEXT(1, renderbuffers);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, renderbuffers[0]);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, 800, 600);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, renderbuffers[0]);
bool status = checkFramebufferStatus();
if (!status)
    std::cout << "Could not initialise FBO" << std::endl;
else
    std::cout << "FBO ready!" << std::endl;
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
My drawing loop:
GLenum errCode;
const GLubyte *errString;
if ((errCode = glGetError()) != GL_NO_ERROR) {
    errString = gluErrorString(errCode);
    fprintf(stderr, "OpenGL Error: %s\n", errString);
}
++frameCount;

// ----------- First pass to fill the depth buffer -------------------
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffers[0]);
std::cout << "FBO bound" << std::endl;

// Enable depth testing
glEnable(GL_DEPTH_TEST);
glDisable(GL_STENCIL_TEST);
glDepthMask(GL_TRUE);

// Disable stencil test, we don't need that for this pass
glClearStencil(0);
glEnable(GL_STENCIL_TEST);

// Disable drawing to the color buffer
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);

// We clear all buffers and reset the modelview matrix
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
glLoadIdentity();

// We set our viewpoint
gluLookAt(eyePoint[0], eyePoint[1], eyePoint[2], 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
//std::cout << angle << std::endl;
std::cout << "Writing to FBO depth" << std::endl;

// Draw the VBO's; this does not draw anything to the screen, we are just filling the depth buffer
glDrawElements(GL_TRIANGLES, 120, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
After this I call a function that calls glReadPixels().
The function does not even get called; the loop restarts at the function call.
Apparently I solved this as well: I had to use
glReadPixels( 0, 0,Engine::fWidth, Engine::fHeight, GL_DEPTH_COMPONENT,GL_UNSIGNED_SHORT, pixels);
With GL_UNSIGNED_SHORT instead of GL_FLOAT (or any other format for that matter)
The fastest way of doing this is to use asynchronous pixel buffer objects; there's a good explanation here:
http://www.songho.ca/opengl/gl_pbo.html
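For reference, the ping-pong scheme from that article applied to the depth buffer looks roughly like this sketch. It uses the core GL names (your code uses the equivalent ARB-suffixed ones) and assumes an 800x600 window:
// Setup: two pack PBOs so a new read can start while last frame's data is mapped.
const int W = 800, H = 600;
GLuint pbo[2];
glGenBuffers(2, pbo);
for (int i = 0; i < 2; ++i) {
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
    glBufferData(GL_PIXEL_PACK_BUFFER, W * H * sizeof(GLfloat), nullptr, GL_STREAM_READ);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

// Per frame:
static int index = 0;
int next = 1 - index;

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[index]);
glReadPixels(0, 0, W, H, GL_DEPTH_COMPONENT, GL_FLOAT, 0);   // starts an asynchronous transfer

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[next]);
GLfloat* depth = (GLfloat*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (depth) {
    // ... use last frame's depth values here ...
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);                      // don't forget to unmap
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

index = next;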
I would render to an FBO and read its depth buffer after the frame has been rendered. PBOs are outdated technology.
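If you go the FBO route, one option is to attach a depth texture rather than a renderbuffer, so the depth values can be read back (or sampled in a later pass) directly. A rough sketch, again assuming an 800x600 buffer:
#include <vector>

// Setup: a depth texture attached to an FBO with no color attachment.
GLuint depthTex = 0, fbo = 0;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 800, 600, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

// ... render the depth-only pass into the FBO here ...

// Afterwards, read the depth values back from the texture (or sample depthTex in a later pass).
glBindTexture(GL_TEXTURE_2D, depthTex);
std::vector<GLfloat> depth(800 * 600);
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());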