OpenGL lazy function call (OpenGL call is delayed only at the first frame)

At the first frame of my application, the glGetTextureLevelParameterivEXT() call stalls, and I don't know why. After the first frame, the stall no longer occurs.
The function is called in my framebuffer bind method. The stall also only occurs in the second pass of summed-area-table rendering, which uses a ping-pong technique: the render method swaps the src and dst textures. After glFramebufferTexture2D() and glDrawBuffers() are called, width() calls glGetTextureLevelParameterivEXT(). I already replaced this call with a constant, but then the next GL function stalled instead. Strangely, this only happens at the first frame of my application.
Here is the code from my framebuffer bind method:
uint numDrawBuffers = 0;
for( int k=0; k < MAX_COLOR_ATTACHMENTS; k++ )
{
    gl::Texture* t = textureList[k];
    GLint  L   = t ? layerList[k]    : 0;
    GLint  M   = t ? mipLevelList[k] : 0;
    GLuint tex = t ? t->ID           : 0;
    bool bnone = activeTargets[k]==NULL && t==NULL;
    GLenum target = activeTargets[k] = ( t ? t->target : activeTargets[k] );
    if(t) numDrawBuffers++;
    if(bnone){ /* do nothing */ }
    else if(target==GL_TEXTURE_1D)       glFramebufferTexture1D( GL_FRAMEBUFFER, drawBuffers[k], target, tex, M );
    else if(target==GL_TEXTURE_2D)       glFramebufferTexture2D( GL_FRAMEBUFFER, drawBuffers[k], target, tex, M );
    else if(target==GL_TEXTURE_3D)       glFramebufferTextureLayer( GL_FRAMEBUFFER, drawBuffers[k], tex, M, L );
    else if(target==GL_TEXTURE_1D_ARRAY) glFramebufferTextureLayer( GL_FRAMEBUFFER, drawBuffers[k], tex, M, L );
    else if(target==GL_TEXTURE_2D_ARRAY) glFramebufferTextureLayer( GL_FRAMEBUFFER, drawBuffers[k], tex, M, L );
    if(t==NULL) activeTargets[k] = NULL;
}
// if nothing is bound, unbind the renderbuffer and framebuffer and return
if(numDrawBuffers==0){ glBindRenderbuffer(GL_RENDERBUFFER,0); glBindFramebuffer(GL_FRAMEBUFFER,0); return; }
else glDrawBuffers( numDrawBuffers, drawBuffers );
GLint width=t0->width(mipLevel0), height=t0->height(mipLevel0); // t0, mipLevel0 come from surrounding code not shown
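One way to avoid the per-frame GL query entirely is to cache level sizes on the CPU, since mip dimensions follow directly from the base size. A minimal sketch (hypothetical struct, not the asker's actual texture class):

```cpp
#include <algorithm>

// Hypothetical stand-in for the asker's width()/height(): mip level k of a
// texture has size max(1, base >> k), so the sizes can be computed on the CPU
// instead of querying glGetTextureLevelParameterivEXT() every frame.
struct CachedTexSize {
    int baseWidth, baseHeight;
    int width (int level) const { return std::max(1, baseWidth  >> level); }
    int height(int level) const { return std::max(1, baseHeight >> level); }
};
```

The sizes never change after texture creation, so computing them locally removes the driver round trip from the render loop entirely.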


OpenGL : reading color buffer

I attached 4 color buffers to a framebuffer and render in each of them. Each color buffer has the size of the window. I'm trying to read the color of the pixels of one of these color buffers using the coordinates of the mouse pointer.
mouse move event handler
void mouseMoveEvent(QMouseEvent *event)
{
    int x = event->pos().x();
    int y = event->pos().y();
    makeCurrent();
    glBindFramebuffer(GL_READ_FRAMEBUFFER, FBOIndex::GEOMETRY);
    {
        // I save the values I'm interested in in the attachment GL_COLOR_ATTACHMENT3
        // but I always get 0 from any other attachment I try
        glReadBuffer(GL_COLOR_ATTACHMENT3);
        QVector<GLubyte> pixel(3);
        glReadPixels(x, geometry().height() - y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, &(pixel[0]));
        QString PixelColor = QColor(pixel[0], pixel[1], pixel[2]).name();
        qDebug() << PixelColor; // => always 0
    }
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebufferObject());
    doneCurrent();
}
But for every color buffer I always read the value 0.
The color buffers are written correctly during the rendering phase; I verified each of them by displaying the texture it is attached to. I also tested reading the pixel under the mouse pointer from the default framebuffer, and that works correctly.
What am I doing wrong?
Thanks!
EDIT
The seemingly strange thing is that, if I use a "dedicated" framebuffer, I can correctly read the values stored in the texture.
void mouseMoveEvent(QMouseEvent *event)
{
    int x = event->pos().x();
    int y = event->pos().y();
    GLuint fbo;
    makeCurrent();
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    {
        GLuint texture = textures[TextureIndex::COLOUR];
        glBindTexture(GL_TEXTURE_2D, texture);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
        QVector<GLubyte> pixel(3);
        glReadPixels(x, geometry().height() - y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, &(pixel[0]));
        QString PixelColor = QColor(pixel[0], pixel[1], pixel[2]).name();
        qDebug() << PixelColor; // => correct value
    }
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebufferObject());
    glDeleteFramebuffers(1, &fbo);
    doneCurrent();
}
But it clearly seems wasteful to create another framebuffer when I already have one containing exactly the information I need.
I also tried to read the values directly from the texture (as suggested by @Spektre), but in this case too I always get 0.
void mouseMoveEvent(QMouseEvent *event)
{
    int x = event->pos().x();
    int y = event->pos().y();
    makeCurrent();
    {
        GLuint texture = textures[TextureIndex::COLOUR];
        glBindTexture(GL_TEXTURE_2D, texture);
        QVector<GLubyte> pixel(3);
        // note: glTexSubImage2D *uploads* data to the texture;
        // glGetTexImage is the call that reads texel data back
        glTexSubImage2D(GL_TEXTURE_2D, 0, x, geometry().height() - y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, &(pixel[0]));
        QString PixelColor = QColor(pixel[0], pixel[1], pixel[2]).name();
        qDebug() << PixelColor; // => always 0
    }
    doneCurrent();
}
My approach was correct, but I was not binding the correct framebuffer.
FBOIndex::GEOMETRY is an enum value that I use to index a FBOs array, where I store all the framebuffer object names, so in general it is not a correct framebuffer object name.
I have defined a method addFBO(index) that creates a framebuffer and stores it at the position index in the FBOs array. The method returns the framebuffer object name of the generated framebuffer. If a framebuffer already exists at the position index, then the method simply returns the associated framebuffer object name.
So, by changing the code in the following way, I finally get the desired result.
void mouseMoveEvent(QMouseEvent *event)
{
    int x = event->pos().x();
    int y = event->pos().y();
    makeCurrent();
    glBindFramebuffer(GL_READ_FRAMEBUFFER, addFBO(FBOIndex::GEOMETRY));
    {
        glReadBuffer(GL_COLOR_ATTACHMENT3);
        QVector<GLubyte> pixel(3);
        glReadPixels(x, geometry().height() - y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, &(pixel[0]));
        QString PixelColor = QColor(pixel[0], pixel[1], pixel[2]).name();
        qDebug() << PixelColor; // => correct value
    }
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebufferObject());
    doneCurrent();
}
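The get-or-create behaviour described for addFBO(index) can be sketched as follows; the names and the stand-in ID generator are illustrative (the real method would call glGenFramebuffers and set up the attachments):

```cpp
#include <array>
#include <cstddef>

// Hypothetical sketch of addFBO(): return the cached framebuffer name for an
// index, creating it on first use. 0 stands for "not created yet", which is
// safe because 0 is never a generated framebuffer name.
struct FBOCache {
    std::array<unsigned, 8> fbos{};   // value-initialized to 0
    unsigned next = 1;                // stands in for glGenFramebuffers

    unsigned addFBO(std::size_t index) {
        if (fbos[index] == 0)
            fbos[index] = next++;     // real code: glGenFramebuffers + attachment setup
        return fbos[index];
    }
};
```

The key point is that the enum value indexes the cache but is never itself passed to glBindFramebuffer.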

deferred rendering - Renderbuffer vs Texture

So, I've been reading about this, and I still haven't reached a conclusion. Some examples use textures as their render targets, some people use renderbuffers, and some use both!
For example, using just textures:
// Create the gbuffer textures
glGenTextures(ARRAY_SIZE_IN_ELEMENTS(m_textures), m_textures);
glGenTextures(1, &m_depthTexture);
for (unsigned int i = 0 ; i < ARRAY_SIZE_IN_ELEMENTS(m_textures) ; i++) {
    glBindTexture(GL_TEXTURE_2D, m_textures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, WindowWidth, WindowHeight, 0, GL_RGB, GL_FLOAT, NULL);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, m_textures[i], 0);
}
And using both:
glGenRenderbuffersEXT ( 1, &m_diffuseRT );
glBindRenderbufferEXT ( GL_RENDERBUFFER_EXT, m_diffuseRT );
glRenderbufferStorageEXT ( GL_RENDERBUFFER_EXT, GL_RGBA, m_width, m_height );
glFramebufferRenderbufferEXT ( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, m_diffuseRT );
glGenTextures ( 1, &m_diffuseTexture );
glBindTexture ( GL_TEXTURE_2D, m_diffuseTexture );
glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGBA, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
// Attach the texture to the FBO
glFramebufferTexture2DEXT ( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, m_diffuseTexture, 0 );
What's the difference? What's the point of creating a texture, a renderbuffer, and then assigning one to the other? After a texture is supplied with an image, its memory is allocated, so why would one need to bind it to a renderbuffer?
Why would one use textures or renderbuffers? What would be the advantages?
I've read that you cannot read from a renderbuffer, only a texture. What's the use of it, then?
EDIT:
So, my current code for a GBuffer is this:
enum class GBufferTextureType
{
Depth = 0,
Position,
Diffuse,
Normal,
TexCoord
};
.
.
.
glGenFramebuffers ( 1, &OpenGLID );
if ( Graphics::GraphicsBackend->CheckError() == false )
{
    Delete();
    return false;
}
glBindFramebuffer ( GL_FRAMEBUFFER, OpenGLID );
if ( Graphics::GraphicsBackend->CheckError() == false )
{
    Delete();
    return false;
}
uint32_t TextureGLIDs[5];
glGenTextures ( 5, TextureGLIDs );
if ( Graphics::GraphicsBackend->CheckError() == false )
{
    Delete();
    return false;
}
// Create the depth texture
glBindTexture ( GL_TEXTURE_2D, TextureGLIDs[ ( int ) GBufferTextureType::Depth] );
glTexImage2D ( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, In_Dimensions.x, In_Dimensions.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL );
glFramebufferTexture2D ( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, TextureGLIDs[ ( int ) GBufferTextureType::Depth], 0 );
// Create the color textures
for ( unsigned cont = 1; cont < 5; ++cont )
{
    glBindTexture ( GL_TEXTURE_2D, TextureGLIDs[cont] );
    glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGB32F, In_Dimensions.x, In_Dimensions.y, 0, GL_RGB, GL_FLOAT, NULL );
    glFramebufferTexture2D ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + cont, GL_TEXTURE_2D, TextureGLIDs[cont], 0 );
}
// Specify draw buffers
// (note: the color textures above were attached starting at
// GL_COLOR_ATTACHMENT0 + 1, while this list selects attachments 0-3)
GLenum DrawBuffers[4];
for ( unsigned cont = 0; cont < 4; ++cont )
    DrawBuffers[cont] = GL_COLOR_ATTACHMENT0 + cont;
glDrawBuffers ( 4, DrawBuffers );
if ( Graphics::GraphicsBackend->CheckError() == false )
{
    Delete();
    return false;
}
GLenum Status = glCheckFramebufferStatus ( GL_FRAMEBUFFER );
if ( Status != GL_FRAMEBUFFER_COMPLETE )
{
    Delete();
    return false;
}
Dimensions = In_Dimensions;
// Unbind
glBindFramebuffer ( GL_FRAMEBUFFER, 0 );
Is this the way to go?
I still have to write the corresponding shaders...
What's the point of creating a texture, a render buffer, and then assign one to the other?
That's not what's happening. But that's OK, because that second example is errant nonsense. The glFramebufferTexture2DEXT call overrides the binding made by glFramebufferRenderbufferEXT; the renderbuffer is never actually used after it is created.
If you found that code online somewhere, I strongly advise you to disregard anything that source told you about OpenGL development. Though I would advise that anyway, since it's using the "EXT" extension functions in 2016, almost a decade since core FBOs became available.
I've read that you cannot read from renderbuffer, only texture. What's the use of it, then?
That is entirely the point of them: you use a renderbuffer for images that you don't want to read from. That's not useful for deferred rendering, since you really do want to read from them.
But imagine if you're generating a reflection image of a scene, which you will later use as a texture in your main scene. Well, to render the reflection scene, you need a depth buffer. But you're not going to read from that depth buffer (not as a texture, at any rate); you need a depth buffer for depth testing. But the only image you're going to read from after is the color image.
So you would make the depth buffer a renderbuffer. That tells the implementation that the image can be put into whatever storage is most efficient for use as a depth buffer, without having to worry about read-back performance. This may or may not have a performance impact. But at the very least, it won't be any slower than using a texture.
Most rendering scenarios need a depth and/or stencil buffer, though it is rare that you would ever need to sample the data stored in the stencil buffer from a shader.
It would be impossible to do depth/stencil tests if your framebuffer did not have a location to store these data and any render pass that uses these fragment tests requires a framebuffer with the appropriate images attached.
If you are not going to use the depth/stencil buffer data in a shader, a renderbuffer will happily satisfy storage requirements for fixed-function fragment tests. Renderbuffers have fewer format restrictions than textures do, particularly if we detour this discussion to multisampling.
D3D10 introduced support for multisampled color textures but omitted multisampled depth textures; D3D10.1 later fixed that problem and GL3.0 was finalized after D3D10's initial design oversight was corrected.
Pre-GL3 / D3D10.1 design would manifest itself in GL as a multisampled framebuffer object that allows either texture or renderbuffer color attachments but forces you to use a renderbuffer for the depth attachment.
Renderbuffers are ultimately the lowest common denominator for storage; they will get you through tough jams on feature-limited hardware. You can actually blit the data stored in a renderbuffer into a texture in some situations where you could not draw directly into the texture.
To that end, you can resolve a multisampled renderbuffer into a single-sampled texture by blitting from one framebuffer to another. This is implicit multisampling, and it (would) allow you to use the anti-aliased results of a previous render pass with a standard texture lookup. Unfortunately it is thoroughly useless for anti-aliasing in deferred shading--you need explicit multisample resolve for that.
Nonetheless, it is incorrect to say that a renderbuffer is not readable; it is, in every sense of the word. But since your goal is deferred shading, reading it would require additional GL commands to copy the data into a texture.

OpenGL FBO with MRT writing to back buffer

I have a confusing situation in OpenGL 3.3 on the Mac. I have created an FBO with five attachment points sized at 512x512 apiece. I constructed a shader that writes to gl_FragData[0-4] for diffuse, normal, position, specular and emissive for my geometry. When I render the scene the back buffer AND the render targets are being updated even though I’ve only bound the FBO!
Here’s some code:
void OpenGLESDriver::setFrameBufferAttachments( u32 nAttachments, const u32* aAttachments ){
    pushText( "setFrameBufferAttachments" );
#if USE_MRT
    GLint max;
    glGetIntegerv( GL_MAX_DRAW_BUFFERS, &max );
    GLenum aBuffers[max];
    if( nAttachments > max ){
        nAttachments = max;
    }
    for( u32 i=0; i<nAttachments; ++i ){
        aBuffers[i] = GL_COLOR_ATTACHMENT0+aAttachments[i];
    }
    for( u32 i=nAttachments; i<max; ++i ){
        aBuffers[i] = GL_NONE;
    }
    glDrawBuffers( max, aBuffers );
    glAssert();
#else
    glDrawBuffer( GL_COLOR_ATTACHMENT0+aAttachments[0] );
    glAssert();
#endif
    popText();
}
And the FBO binder:
bool OpenGLESDriver::setFrameBuffer( const FrameBuffer::handle& hFrameBuffer ){
    if( hFrameBuffer ){
        pushText( "setFrameBuffer" );
        glBindFramebuffer( GL_FRAMEBUFFER, hFrameBuffer->toFBO() );
        glAssert();
        if( !hFrameBuffer->toColorTargets().empty() ){
            u32 nAttachments = hFrameBuffer->toColorTargets().size();
            u32 aAttachments[nAttachments];
            for( u32 i=0; i<nAttachments; ++i ){
                aAttachments[i] = i;
            }
            setFrameBufferAttachments( nAttachments, aAttachments );
        }else{
            setFrameBufferAttachments( 0, 0 );
        }
        int w = hFrameBuffer->toDepthTexture()->toWidth();
        int h = hFrameBuffer->toDepthTexture()->toHeight();
        glViewport( 0, 0, w, h );
        glAssert();
        //clear out all texture stages because we don't want a left over
        //frame buffer texture being bound to the shader.
        for( u32 i=0; i<Material::kMaxSamplers; ++i ){
            setTextureStage( i, 0 );
        }
        popText();
        return true;
    }
    return false;
}
I create the FBO with:
FrameBuffer::handle OpenGLESDriver::createFrameBuffer( const FrameBuffer::ColorTargets& vColorTargets, const DepthTarget::handle& hDT ){
    //--------------------------------------------------------------------
    // Save off default FBO.
    //--------------------------------------------------------------------
    if( s_iFBOMaster < 0 ){
        glGetIntegerv( GL_FRAMEBUFFER_BINDING, &s_iFBOMaster );
        glAssert();
    }
    //--------------------------------------------------------------------
    // Generate frame buffer object.
    //--------------------------------------------------------------------
    GLuint fbo;
    glGenFramebuffers( 1, &fbo );
    glAssert();
    glBindFramebuffer( GL_FRAMEBUFFER, fbo );
    glAssert();
    //--------------------------------------------------------------------
    // Attach color targets (texture or RBO).
    //--------------------------------------------------------------------
    FrameBuffer::ColorTargets::const_iterator itCT = vColorTargets.getIterator();
    u32 mrtIndex = 0;
    while( itCT ){
        const ColorTarget::handle& hCT = itCT++;
        if( !hCT ){
            continue;
        }
        if( hCT->toTexID() ){
            glFramebufferTexture2D(
                GL_FRAMEBUFFER,
                GL_COLOR_ATTACHMENT0+mrtIndex,
                GL_TEXTURE_2D,
                hCT->toTexID(),
                0 );
        }else if( hCT->toRBO() ){
            glFramebufferRenderbuffer(
                GL_FRAMEBUFFER,
                GL_COLOR_ATTACHMENT0+mrtIndex,
                GL_RENDERBUFFER,
                hCT->toRBO() );
        }else{
            DEBUG_ASSERT_ALWAYS( "No color texture or RBO to attach!" );
        }
        glAssert();
        ++mrtIndex;
        if( !checkFBStatus() ){
            e_log( "GL", "Couldn't create color attachment!" );
            hCT.as<ColorTarget>()->toFlags()->bFailed = true;
        }
    }
    //--------------------------------------------------------------------
    // Attach depth target (texture or RBO).
    //--------------------------------------------------------------------
    if( hDT ){
        if( hDT->toTexID() ){
            glFramebufferTexture2D(
                GL_FRAMEBUFFER,
                GL_DEPTH_ATTACHMENT,
                GL_TEXTURE_2D,
                hDT->toTexID(),
                0 );
        }else if( hDT->toRBO() ){
            glFramebufferRenderbuffer(
                GL_FRAMEBUFFER,
                GL_DEPTH_ATTACHMENT,
                GL_RENDERBUFFER,
                hDT->toRBO() );
        }else{
            DEBUG_ASSERT_ALWAYS( "No depth texture or RBO to attach!" );
        }
        glAssert();
        if( !checkFBStatus() ){
            e_log( "GL", "Couldn't create depth attachment!" );
            hDT.as<DepthTarget>()->toFlags()->bFailed = true;
        }
    }
    //--------------------------------------------------------------------
    // New handle.
    //--------------------------------------------------------------------
    glBindFramebuffer( GL_FRAMEBUFFER, 0 );
    glAssert();
    FrameBuffer::handle hFrameBuffer = e_new( FrameBuffer );
    hFrameBuffer->setColorTargets( vColorTargets );
    hFrameBuffer->setDepthTarget( hDT );
    hFrameBuffer->setFBO( u32( fbo ));
    return hFrameBuffer;
}
And I go back to the back buffer with:
void OpenGLESDriver::setDefaultTarget(){
    pushText( "setDefaultTarget" );
    glBindFramebuffer( GL_FRAMEBUFFER, 0 );//s_iFBOMaster );
    glAssert();
    glViewport( 0, 0, IEngine::cxView(), IEngine::cyView() );
    glAssert();
    popText();
}
So the final rendering code looks like:
pushText( "Render MRT pass" );
if( setFrameBuffer( m_tPostFx.buffers[0] )){
    setColorMask( true, true, true, true );
    clearZ();
    enableZBuffer( false );
    setColor( color );
    clearMRT( m_tPostFx.clearMRTShader );
    enableZBuffer( true );
    drawMRTPass();
}
popText();
And for some reason the back buffer is being rendered to as well as the FBO. I must be missing something but haven’t got a clue what. Can anyone see what I’m doing wrong?
After some poking around I finally found the answer. My renderer uses a vector of RenderNode objects which I fill up during rendering. I wasn't clearing that vector after I finished my post-effect. That stale vector was being rendered with the next FBO target, which drew all of the first batch of geometry again into the second FBO. By clearing the vector at the beginning of the render pass and at the end, I got rid of the problem. I'm still trying to track down what was committing all the render nodes again. Hopefully the code I pasted will help anyone who's looking to do FBOs, because it does work after all. :)
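The fix described above, clearing the render-node vector at pass boundaries so stale nodes never get resubmitted, can be sketched like this (RenderNode and RenderQueue are hypothetical names, not the poster's actual classes):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: a pass-scoped render queue that is emptied both before
// filling and after flushing, so no nodes leak into the next FBO pass.
struct RenderNode { int id; };

struct RenderQueue {
    std::vector<RenderNode> nodes;

    void beginPass() { nodes.clear(); }            // drop anything left over
    void submit(RenderNode n) { nodes.push_back(n); }
    std::size_t flush() {                          // "draw" the nodes, then clear
        std::size_t drawn = nodes.size();
        nodes.clear();
        return drawn;
    }
};
```

Clearing at both ends is defensive: even if some other code path submits nodes between passes, they never reach the wrong framebuffer.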

OpenGL Applying Texture to Tessellation

I'm trying to take a concave polygon and apply an image to it as a texture. The polygon can have multiple contours, both internal holes and external "islands". It can be any shape, but will be smaller than the image and will fit inside it. It does not necessarily touch the edges of the image.
I've successfully displayed the tessellated polygon, and textured a simple square, but can't get the two to work together.
Here's how I'm loading the texture:
GLuint GLTexture::LoadTextureRAW( const char * filename, bool wrap )
{
    GLuint texture;
    int width, height;
    BYTE * data;
    FILE * file;
    // open texture data
    file = fopen( filename, "rb" );
    if ( file == NULL ) return 0;
    // allocate buffer
    width = 256;
    height = 256;
    data = (BYTE *)malloc( width * height * 3 );
    // read texture data
    fread( data, width * height * 3, 1, file );
    fclose( file );
    glGenTextures( 1, &texture );
    glBindTexture( GL_TEXTURE_2D, texture );
    glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrap ? GL_REPEAT : GL_CLAMP );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrap ? GL_REPEAT : GL_CLAMP );
    gluBuild2DMipmaps( GL_TEXTURE_2D, 3, width, height, GL_RGB, GL_UNSIGNED_BYTE, data );
    free( data );
    return texture;
}
Here's the tessellation function:
GLuint tessellate1()
{
    GLuint id = glGenLists(1);          // create a display list
    if(!id) return id;                  // failed to create a list, return 0
    GLUtesselator *tess = gluNewTess(); // create a tessellator
    if(!tess) return 0;                 // failed to create tessellation object, return 0
    GLdouble quad1[4][3] = { {-1,3,0}, {0,0,0}, {1,3,0}, {0,2,0} };
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, GLTexture::LoadTextureRAW("texture.raw", true));
    // register callback functions
    gluTessCallback(tess, GLU_TESS_BEGIN, (void (CALLBACK *)())tessBeginCB);
    gluTessCallback(tess, GLU_TESS_END, (void (CALLBACK *)())tessEndCB);
    gluTessCallback(tess, GLU_TESS_ERROR, (void (CALLBACK *)())tessErrorCB);
    gluTessCallback(tess, GLU_TESS_VERTEX, (void (CALLBACK *)())tessVertexCB);
    glNewList(id, GL_COMPILE);
    glColor3f(1,1,1);
    gluTessBeginPolygon(tess, 0); // with NULL data
    gluTessBeginContour(tess);
    gluTessVertex(tess, quad1[0], quad1[0]);
    gluTessVertex(tess, quad1[1], quad1[1]);
    gluTessVertex(tess, quad1[2], quad1[2]);
    gluTessVertex(tess, quad1[3], quad1[3]);
    gluTessEndContour(tess);
    gluTessEndPolygon(tess);
    glEndList();
    gluDeleteTess(tess); // delete after tessellation
    glDisable(GL_TEXTURE_2D);
    setCamera(0, 0, 5, 0, 0, 0);
    return id; // return handle ID of a display list
}
Here's the tessellation vertex callback function:
void CALLBACK tessVertexCB( void *data )
{
    // cast back to double type
    const GLdouble *ptr = (const GLdouble*)data;
    double dImageX = -1, dImageY = -1;
    // hardcoded extents of the polygon for the purposes of testing
    int minX = 607011, maxX = 616590;
    int minY = 4918219, maxY = 4923933;
    // get the % coord of the texture for a poly vertex; assumes image and
    // poly bounds are the same for the purposes of testing
    dImageX = (ptr[0] - minX) / (maxX - minX);
    dImageY = (ptr[1] - minY) / (maxY - minY);
    glTexCoord2d(dImageX, dImageY);
    glVertex2d(ptr[0], ptr[1]);
}
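The extent-to-texture-coordinate mapping in the callback is a plain linear normalization. Isolated as a standalone sketch (using the hardcoded test extents from the callback):

```cpp
#include <utility>

// Linear map from polygon-space extents to [0,1] texture space,
// mirroring the math in the tessellation vertex callback.
std::pair<double, double> toTexCoord(double x, double y,
                                     double minX, double maxX,
                                     double minY, double maxY)
{
    return { (x - minX) / (maxX - minX),
             (y - minY) / (maxY - minY) };
}
```

A vertex at the minimum corner maps to (0,0) and one at the maximum corner maps to (1,1); vertices outside the extents would produce coordinates outside [0,1] and be subject to the wrap mode.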
And here's the display callback:
void displayCB()
{
    // clear buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    // save the initial ModelView matrix before modifying it
    glPushMatrix();
    // transform camera
    glTranslatef(0, 0, cameraDistance);
    glRotatef(cameraAngleX, 1, 0, 0); // pitch
    glRotatef(cameraAngleY, 0, 1, 0); // heading
    // draw meshes
    glCallList(listId1); // id of the tessellated poly
    // draw info messages
    showInfo();
    glPopMatrix();
    glutSwapBuffers();
}
The result is a correctly drawn polygon with no texture applied.
// init
glGenTextures( 1, &texture );
// vertex callback
glBindTexture(GL_TEXTURE_2D, 1);
I don't think the first ID returned by glGenTextures() is required to be 1.
Try using texture instead of 1 in your glBindTexture() call.
Also, there's really no reason to enable texturing and re-bind the texture for every vertex. Just do it once before you call into the tesselator.
You're not capturing the texture binding and Enable inside the display list, so it's not going to be taken into account when you replay it. So, either:
Capture the BindTexture and Enable inside the display list, or
BindTexture and Enable(TEXTURE_2D) before calling CallList
The problem was the glDisable(GL_TEXTURE_2D) call in the tessellation function. After removing it, the texture was applied correctly.

Tile map uses too much CPU with OpenGL and SDL

I've been working on a method to draw a tile-based map with OpenGL and SDL. I finally got it coded, but when I run a basic program that draws a 25x16 tile map and check the CPU usage, it is at 25%; without drawing the map it is at most 1%.
Is there another method to draw the map, or why is the CPU usage so high?
This is the code for drawing the map:
void CMapManager::drawMap(Map *map)
{
    vector<ImagePtr> tempImages = CGameApplication::getInstance()->getGameApp()->getImages();
    GLuint texture = tempImages.at(1)->getTexture();
    glColor3f(1.0f, 1.0f, 1.0f);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glBindTexture( GL_TEXTURE_2D, texture );
    glBegin( GL_QUADS );
    for (int i = 0; i < map->getHeight(); i++)
    {
        for (int j = 0; j < map->getWidth(); j++)
        {
            ImagePtr imgDraw = tempImages.at(map->getMapTiles()[i][j]->getTypeTile());
            glTexCoord2i( 0, 0 );
            glVertex3f( imgDraw->getPosX() + (imgDraw->getWidth()*j), imgDraw->getPosY() + (imgDraw->getHeight()*i), 0.f );
            //Bottom-left vertex (corner)
            glTexCoord2i( 1, 0 );
            glVertex3f( imgDraw->getOffsetX() + (imgDraw->getWidth()*j), imgDraw->getPosY() + (imgDraw->getHeight()*i), 0.f );
            //Bottom-right vertex (corner)
            glTexCoord2i( 1, 1 );
            glVertex3f( imgDraw->getOffsetX() + (imgDraw->getWidth()*j), imgDraw->getOffsetY() + (imgDraw->getHeight()*i), 0.f );
            //Top-right vertex (corner)
            glTexCoord2i( 0, 1 );
            glVertex3f( imgDraw->getPosX() + (imgDraw->getWidth()*j), imgDraw->getOffsetY() + (imgDraw->getHeight()*i), 0.f );
        }
    }
    glEnd();
    glDisable(GL_BLEND);
}
And this is the method where I call that function:
void CGameApplication::renderApplication()
{
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
    glEnable(GL_TEXTURE_2D);
    vector<ImagePtr> tempImages = GApp->getImages();
    vector<ImagePtr>::iterator iterImage;
    for (iterImage = tempImages.begin(); iterImage != tempImages.end(); ++iterImage)
    {
        CImageM->drawSprites( (*iterImage)->getTexture(), (*iterImage)->getPosX(), (*iterImage)->getPosY(),
                              (*iterImage)->getOffsetX(), (*iterImage)->getOffsetY() );
    }
    vector<TextPtr> tempTexts = GApp->getTexts();
    vector<TextPtr>::iterator iterText;
    for (iterText = tempTexts.begin(); iterText != tempTexts.end(); ++iterText)
    {
        CTextM->drawFonts( (*iterText) );
    }
    CMapM->drawMap(GApp->getCurrentMap());
    glDisable(GL_TEXTURE_2D);
}
I already set up a timer that runs after these calls:
GameApplication->getCKeyboardHandler()->inputLogic();
GameApplication->renderApplication();
SDL_GL_SwapBuffers();
GameApplication->getGameApp()->getTimer()->delay();
And the delay function is:
void Timer::delay()
{
    if( this->getTicks() < 1000 / FRAMES_PER_SECOND )
    {
        SDL_Delay( ( 1000 / FRAMES_PER_SECOND ) - this->getTicks() );
    }
}
The constant FRAMES_PER_SECOND is currently 5.
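The delay logic caps the frame rate by sleeping away whatever remains of the per-frame budget; the remaining-time computation can be sketched in isolation (hypothetical helper, milliseconds throughout):

```cpp
// Milliseconds left to sleep in a frame: the budget (1000 / fps) minus the
// time already spent, never negative. Mirrors the logic in Timer::delay().
int remainingDelayMs(int elapsedMs, int fps)
{
    int budget = 1000 / fps;
    return elapsedMs < budget ? budget - elapsedMs : 0;
}
```

At 5 FPS the budget is 200 ms per frame, so even heavy CPU work inside a frame mostly shifts time from SDL_Delay to rendering rather than changing the frame rate.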
And the function that converts an image to a GL texture is:
GLuint CImageManager::imageToGLTexture(std::string name)
{
    GLuint texture;
    SDL_Surface *surface;
    GLenum texture_format;
    GLint nOfColors;
    if ( (surface = IMG_Load(name.c_str())) ) {
        // Check that the image's width is a power of 2
        if ( (surface->w & (surface->w - 1)) != 0 ) {
            printf("warning: image.bmp's width is not a power of 2\n");
        }
        // Also check if the height is a power of 2
        if ( (surface->h & (surface->h - 1)) != 0 ) {
            printf("warning: image.bmp's height is not a power of 2\n");
        }
        // get the number of channels in the SDL surface
        nOfColors = surface->format->BytesPerPixel;
        if (nOfColors == 4) // contains an alpha channel
        {
            if (surface->format->Rmask == 0x000000ff)
                texture_format = GL_RGBA;
            else
                texture_format = GL_BGRA_EXT;
        }
        else if (nOfColors == 3) // no alpha channel
        {
            if (surface->format->Rmask == 0x000000ff)
                texture_format = GL_RGB;
            else
                texture_format = GL_BGR_EXT;
        }
        else {
            printf("warning: the image is not truecolor.. this will probably break\n");
            // this error should not go unhandled
        }
        SDL_SetAlpha(surface, 0, 0);
        // Have OpenGL generate a texture object handle for us
        glGenTextures( 1, &texture );
        // Bind the texture object
        glBindTexture( GL_TEXTURE_2D, texture );
        // Set the texture's stretching properties
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
        // Edit the texture object's image data using the information SDL_Surface gives us
        glTexImage2D( GL_TEXTURE_2D, 0, nOfColors, surface->w, surface->h, 0,
                      texture_format, GL_UNSIGNED_BYTE, surface->pixels );
    }
    else {
        printf("SDL could not load the image: %s\n", SDL_GetError());
        SDL_Quit();
        exit(1);
    }
    if ( surface ) {
        SDL_FreeSurface( surface );
    }
    return texture;
}
Thanks in advance for the help.
Above all, avoid state changes. Combine all your tiles into one texture atlas and render using only one glBegin/glEnd block.
If you don't want to make many changes, try display lists. OpenGL will be able to optimize your calls, but there is no guarantee it will run much faster.
If your map doesn't change a lot, use VBOs. That's the fastest way.
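As a sketch of the batching idea (one atlas texture, one draw call), the tile quads can be generated once into a single CPU-side vertex array and then uploaded to a VBO. The names here are illustrative, and a real atlas would substitute per-tile u,v ranges for the full [0,1] range used below:

```cpp
#include <cstddef>
#include <vector>

// Build interleaved x,y,u,v vertices for a w-by-h tile grid (4 vertices per
// tile, quad order), assuming every tile samples the full [0,1] texture range.
// With an atlas, each tile's u,v would instead come from its atlas slot.
std::vector<float> buildTileQuads(int w, int h, float tileSize)
{
    std::vector<float> verts;
    verts.reserve(static_cast<std::size_t>(w) * h * 4 * 4);
    for (int i = 0; i < h; ++i) {
        for (int j = 0; j < w; ++j) {
            float x0 = j * tileSize, y0 = i * tileSize;
            float x1 = x0 + tileSize, y1 = y0 + tileSize;
            const float quad[16] = { x0,y0, 0,0,  x1,y0, 1,0,
                                     x1,y1, 1,1,  x0,y1, 0,1 };
            verts.insert(verts.end(), quad, quad + 16);
        }
    }
    return verts;
}
```

Building this array once (or only when the map changes) and drawing it with a single glDrawArrays from a VBO removes the per-frame, per-tile immediate-mode calls that dominate the CPU cost in the question's loop.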