So, I've been reading about this, and I still haven't reached a conclusion. Some examples use textures as their render targets, some use renderbuffers, and some use both!
For example, using just textures:
// Create the gbuffer textures
glGenTextures(ARRAY_SIZE_IN_ELEMENTS(m_textures), m_textures);
glGenTextures(1, &m_depthTexture);
for (unsigned int i = 0 ; i < ARRAY_SIZE_IN_ELEMENTS(m_textures) ; i++) {
    glBindTexture(GL_TEXTURE_2D, m_textures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, WindowWidth, WindowHeight, 0, GL_RGB, GL_FLOAT, NULL);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, m_textures[i], 0);
}
And an example using both a renderbuffer and a texture:
glGenRenderbuffersEXT ( 1, &m_diffuseRT );
glBindRenderbufferEXT ( GL_RENDERBUFFER_EXT, m_diffuseRT );
glRenderbufferStorageEXT ( GL_RENDERBUFFER_EXT, GL_RGBA, m_width, m_height );
glFramebufferRenderbufferEXT ( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, m_diffuseRT );
glGenTextures ( 1, &m_diffuseTexture );
glBindTexture ( GL_TEXTURE_2D, m_diffuseTexture );
glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGBA, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
// Attach the texture to the FBO
glFramebufferTexture2DEXT ( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, m_diffuseTexture, 0 );
What's the difference? What's the point of creating a texture, a renderbuffer, and then assigning one to the other? After you successfully supply a texture with an image, its memory is already allocated, so why would you need to bind it to a renderbuffer?
Why would one use textures or renderbuffers? What would be the advantages?
I've read that you cannot read from a renderbuffer, only from a texture. What's the use of it, then?
EDIT:
So, my current code for a GBuffer is this:
enum class GBufferTextureType
{
    Depth = 0,
    Position,
    Diffuse,
    Normal,
    TexCoord
};
.
.
.
glGenFramebuffers ( 1, &OpenGLID );
if ( Graphics::GraphicsBackend->CheckError() == false )
{
Delete();
return false;
}
glBindFramebuffer ( GL_FRAMEBUFFER, OpenGLID );
if ( Graphics::GraphicsBackend->CheckError() == false )
{
Delete();
return false;
}
uint32_t TextureGLIDs[5];
glGenTextures ( 5, TextureGLIDs );
if ( Graphics::GraphicsBackend->CheckError() == false )
{
Delete();
return false;
}
// Create the depth texture
glBindTexture ( GL_TEXTURE_2D, TextureGLIDs[ ( int ) GBufferTextureType::Depth] );
glTexImage2D ( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, In_Dimensions.x, In_Dimensions.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL );
glFramebufferTexture2D ( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, TextureGLIDs[ ( int ) GBufferTextureType::Depth], 0 );
// Create the color textures
for ( unsigned cont = 1; cont < 5; ++cont )
{
glBindTexture ( GL_TEXTURE_2D, TextureGLIDs[cont] );
glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGB32F, In_Dimensions.x, In_Dimensions.y, 0, GL_RGB, GL_FLOAT, NULL );
glFramebufferTexture2D ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + ( cont - 1 ), GL_TEXTURE_2D, TextureGLIDs[cont], 0 ); // attachments 0..3, matching the draw buffers set below
}
// Specify draw buffers
GLenum DrawBuffers[4];
for ( unsigned cont = 0; cont < 4; ++cont )
DrawBuffers[cont] = GL_COLOR_ATTACHMENT0 + cont;
glDrawBuffers ( 4, DrawBuffers );
if ( Graphics::GraphicsBackend->CheckError() == false )
{
Delete();
return false;
}
GLenum Status = glCheckFramebufferStatus ( GL_FRAMEBUFFER );
if ( Status != GL_FRAMEBUFFER_COMPLETE )
{
Delete();
return false;
}
Dimensions = In_Dimensions;
// Unbind
glBindFramebuffer ( GL_FRAMEBUFFER, 0 );
Is this the way to go?
I still have to write the corresponding shaders...
What's the point of creating a texture, a render buffer, and then assign one to the other?
That's not what's happening. But that's OK, because that second example code is errant nonsense. The glFramebufferTexture2DEXT call overrides the attachment made by glFramebufferRenderbufferEXT, so the renderbuffer is never actually used after it is created.
If you found that code online somewhere, I strongly advise you to disregard anything that source told you about OpenGL development. Though I would advise that anyway, since it's using the "EXT" extension functions in 2016, almost a decade since core FBOs became available.
I've read that you cannot read from a renderbuffer, only from a texture. What's the use of it, then?
That is entirely the point of them: you use a renderbuffer for images that you don't want to read from. That's not useful for deferred rendering, since you really do want to read from them.
But imagine if you're generating a reflection image of a scene, which you will later use as a texture in your main scene. Well, to render the reflection scene, you need a depth buffer. But you're not going to read from that depth buffer (not as a texture, at any rate); you need a depth buffer for depth testing. But the only image you're going to read from after is the color image.
So you would make the depth buffer a renderbuffer. That tells the implementation that the image can be put into whatever storage is most efficient for use as a depth buffer, without having to worry about read-back performance. This may or may not have a performance impact. But at the very least, it won't be any slower than using a texture.
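As a rough sketch of that kind of reflection-pass setup (the names, dimensions, and formats here are placeholders, not taken from your code):
// Color texture we will sample later, plus a depth renderbuffer we only depth-test against
GLuint reflectionFbo, reflectionColorTex, reflectionDepthRbo;
glGenTextures(1, &reflectionColorTex);
glBindTexture(GL_TEXTURE_2D, reflectionColorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenRenderbuffers(1, &reflectionDepthRbo);
glBindRenderbuffer(GL_RENDERBUFFER, reflectionDepthRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glGenFramebuffers(1, &reflectionFbo);
glBindFramebuffer(GL_FRAMEBUFFER, reflectionFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, reflectionColorTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, reflectionDepthRbo);
// Render the reflection here; afterwards sample reflectionColorTex in the main pass.
// reflectionDepthRbo is only ever used for depth testing and is never read back.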
Most rendering scenarios need a depth and/or stencil buffer, though it is rare that you would ever need to sample the data stored in the stencil buffer from a shader.
It would be impossible to do depth/stencil tests if your framebuffer did not have a location to store these data and any render pass that uses these fragment tests requires a framebuffer with the appropriate images attached.
If you are not going to use the depth/stencil buffer data in a shader, a renderbuffer will happily satisfy storage requirements for fixed-function fragment tests. Renderbuffers have fewer format restrictions than textures do, particularly if we detour this discussion to multisampling.
D3D10 introduced support for multisampled color textures but omitted multisampled depth textures; D3D10.1 later fixed that problem, and GL 3.0 was finalized after that initial design oversight had already been corrected.
The pre-GL3 / pre-D3D10.1 situation would manifest itself in GL as a multisampled framebuffer object that allows either texture or renderbuffer color attachments but forces you to use a renderbuffer for the depth attachment.
Renderbuffers are ultimately the lowest common denominator for storage; they will get you through tough jams on feature-limited hardware. You can actually blit the data stored in a renderbuffer into a texture in some situations where you could not draw directly into the texture.
To that end, you can resolve a multisampled renderbuffer into a single-sampled texture by blitting from one framebuffer to another. This is implicit multisampling, and it (would) allow you to use the anti-aliased results of a previous render pass with a standard texture lookup. Unfortunately it is thoroughly useless for anti-aliasing in deferred shading; you need explicit multisample resolve for that.
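The resolve itself is just a blit between two complete framebuffers, roughly like this (a sketch; msaaFbo is assumed to have multisampled attachments, resolveFbo a single-sampled color texture of the same size):
// Resolve the multisampled image into the single-sampled target
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
// The resolve target's color texture can now be sampled like any other texture.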
Nonetheless, it is incorrect to say that a renderbuffer is not readable; it is readable in every sense of the word, but since your goal is deferred shading, reading it would require additional GL commands to copy the data into a texture first.
I've written some code that adds a UI overlay to an existing OpenGL application.
Unfortunately, I am not proficient in OpenGL but I do know that it somehow always manages to fail on some device.
Some backstory:
The general pipeline is:
Application renders -> UI is rendered to FBO -> FBO is blitted to obtain an RGBA texture -> Texture is drawn on top of the UI scene.
So far so good. Then I encountered an issue on Intel cards on Ubuntu 16.04 where the texture broke between context switches (the UI rendering is done using a QOpenGLContext, while the application uses a raw OpenGL context managed by OGRE, but the QOpenGLContext is set to share resources). I worked around that issue by checking whether sharing works (create a texture in one context and check whether its content is correct in the other) and, if not, downloading the content while still in context B and uploading it again in context A.
However, on Ubuntu 18.04 on the same machine, this check actually passes for some reason: the texture content is still correct in the other context when retrieved with glGetTexImage.
Now here's the problem:
The texture is not being rendered. I get just the application scene with nothing on top, but if I manually enable the workaround of downloading the texture and re-uploading it to a texture created in the application context, it works.
How can it be that the texture's content is alright but it won't show unless I grab it using glGetTexImage and re-upload it to a texture created in the other context using glTexImage2D?
There must be some state that is invalid and that gets set correctly by the glTexImage2D call.
Here's the code after the UI is rendered:
if (!checked_can_share_texture_)
{
qopengl_wrapper_->drawInvisibleTestOverlay();
}
qopengl_wrapper_->finishRender();
if ( !can_share_texture_ || !checked_can_share_texture_ )
{
glBindTexture( GL_TEXTURE_2D, qopengl_wrapper_->texture());
glGetTexImage( GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixel_data_ );
}
glBindTexture( GL_TEXTURE_2D, 0 );
qopengl_wrapper_->doneCurrent(); // Makes the applications context current again
glDisable( GL_DEPTH_TEST );
glDisable( GL_CULL_FACE );
glDisable( GL_LIGHTING );
glEnable( GL_BLEND );
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
glUseProgram(shader_program_);
glUniform1i(glGetUniformLocation(shader_program_, "tex"), 0);
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer_object_);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), nullptr);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(1);
glActiveTexture( GL_TEXTURE0 );
glEnable( GL_TEXTURE_2D );
if ( can_share_texture_ )
{
glBindTexture( GL_TEXTURE_2D, qopengl_wrapper_->texture());
if ( !checked_can_share_texture_)
{
const int count = qopengl_wrapper_->size().width() * qopengl_wrapper_->size().height() * 4;
const int thresh = std::ceil(count / 100.f);
unsigned char content[count];
glGetTexImage( GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, content);
int wrong = 0;
// can_share_texture_ = false; // Bypassing the actual check will make it work
for (int i = 0; i < count; ++i) {
if (content[i] == pixel_data_[i]) continue;
if (++wrong < thresh) continue;
can_share_texture_ = false;
LOG(
"OverlayManager: Looks like texture sharing isn't working on your system. Falling back to texture copying." );
// If we can't share textures, we have to generate one
glActiveTexture( GL_TEXTURE0 );
glGenTextures( 1, &texture_ );
glBindTexture( GL_TEXTURE_2D, texture_ );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
break;
}
if (can_share_texture_)
{
delete pixel_data_;
pixel_data_ = nullptr;
LOG("Texture sharing seems supported. Count: %d", count);
}
checked_can_share_texture_ = true;
}
}
else
{
glBindTexture( GL_TEXTURE_2D, texture_ );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, qopengl_wrapper_->size().width(), qopengl_wrapper_->size().height(), 0,
GL_RGBA, GL_UNSIGNED_BYTE, pixel_data_ );
}
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glUseProgram(0);
Vertex Shader
#version 130
in vec3 pos;
in vec2 coord;
out vec2 texCoord;
void main()
{
    gl_Position = vec4(pos, 1.0); //Just output the incoming vertex
    texCoord = coord;
}
Fragment Shader
#version 130
uniform sampler2D tex;
in vec2 texCoord;
void main()
{
    gl_FragColor = texture(tex, texCoord);
}
TL;DR: The texture isn't rendered (it comes out completely transparent), but if I copy it to memory using glGetTexImage it looks fine, and if I copy that back into a texture created on the application context, it renders fine.
Graphics card is an Intel UHD 620 with Mesa version 18.2.8.
Edit: In case it wasn't clear, I'm copying the texture in the application's context, not the texture's original context (i.e. the same context where the working texture is created), so if sharing didn't work I shouldn't get the correct content at that point.
I'm studying framebuffers and I've made a mirror in my scene. It works fine except for the depth testing; I got stuck trying to make that work (when rendering to the default framebuffer, depth testing works fine). I would appreciate any help. Here is the code:
glEnable( GL_DEPTH_TEST );
glViewport( 0, 0, 512, 512 );
unsigned int fbo;
glGenFramebuffers( 1, &fbo );
glBindFramebuffer( GL_FRAMEBUFFER, fbo );
unsigned int rbo;
glGenRenderbuffers( 1, &rbo );
glBindRenderbuffer( GL_RENDERBUFFER, rbo );
glRenderbufferStorage( GL_RENDERBUFFER, GL_DEPTH, 512, 512 );
glBindRenderbuffer( GL_RENDERBUFFER, 0 );
glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
GL_RENDERBUFFER, rbo ); //if remove this, mirror works but without depth test
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
this->mirror->texturePack[0]->textureId(), 0 );
//render scene from mirror camera
glBindFramebuffer( GL_FRAMEBUFFER, 0 );
glViewport( 0, 0, this->width(), this->height() );
//render scene from main camera
Your framebuffer is incomplete, because GL_DEPTH is not a valid internal format for a renderbuffer storage.
See glRenderbufferStorage. Try GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24 or GL_DEPTH_COMPONENT32:
glRenderbufferStorage( GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 512, 512 );
See OpenGL 4.6 core profile specification, 9.4 Framebuffer Completeness, page 323:
The internal formats of the attached images can affect the completeness of
the framebuffer, so it is useful to first define the relationship between the internal
format of an image and the attachment points to which it can be attached.
• An internal format is depth-renderable if it is DEPTH_COMPONENT or one of the formats from table 8.13 whose base internal format is DEPTH_COMPONENT or DEPTH_STENCIL. No other formats are depth-renderable.
Note, the framebuffer completeness can be checked by:
glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE
I've finally solved it. I added the glClear( GL_DEPTH_BUFFER_BIT ) command right after binding the mirror framebuffer, and after that it worked.
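Roughly, the change looks like this (a sketch based on the code above):
glBindFramebuffer( GL_FRAMEBUFFER, fbo );
glClear( GL_DEPTH_BUFFER_BIT ); // reset the mirror's depth renderbuffer before the mirror pass
//render scene from mirror camera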
This was my understanding of the basic steps for rendering to multiple textures.
1) Get the shader's sampler uniform locations and bind them to texture units
m_uihDiffuseMap = glGetUniformLocation( m_iShaderProgramHandle, "diffuseMap" );
if( m_uihDiffuseMap != -1 )
glUniform1i( m_uihDiffuseMap, 0 );
m_uihNormalMap = glGetUniformLocation( m_iShaderProgramHandle, "normalMap" );
if( m_uihNormalMap != -1 )
glUniform1i( m_uihNormalMap, 1 );
2) Bind to what you want to render to
glBindFramebuffer( GL_FRAMEBUFFER, m_uifboHandle );
//diffuse texture binding
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, m_uiTextureHandle1, 0);
//normal texture binding (GL_COLOR_ATTACHMENT0 + 1 is equivalent to GL_COLOR_ATTACHMENT1)
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0+1, m_uiTextureHandle2, 0);
3) Clear the buffer & specify what buffers you want to draw to
glClearColor( 1.0f, 1.0f, 1.0f, 1.0f );
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, buffers);
4) Set your shader program for rendering
glUseProgram( m_uiShaderProgramHandle );
5) Pass variables to shader like our 2 different textures
glActiveTexture( GL_TEXTURE0 );
glBindTexture( GL_TEXTURE_2D, uihDiffuseMap );
//or(GL_TEXTURE1)
glActiveTexture( GL_TEXTURE0+1 );
glBindTexture( GL_TEXTURE_2D, uihNormalMap );
6) Do render call things
//Draw stuff
7) Set things back to default in case other render procedures rely on different state
glBindFramebuffer( GL_FRAMEBUFFER, 0 );
glUseProgram( 0 );
------------------------------FRAGMENT SHADER-----------------------------------
In the fragment shader you have to output the 2 results like this right?
#version 330
in vec2 vTexCoordVary;
uniform sampler2D diffuseMap;
uniform sampler2D normalMap;
out vec4 fragColor[2];
void main( void )
{
    fragColor[0] = texture( diffuseMap, vTexCoordVary );
    fragColor[1] = texture( normalMap, vTexCoordVary );
}
I've double checked:
- My diffuse texture and normal texture are loaded fine. If I pass my normal texture as the texture to use as TEXTURE0, it will show up.
- I get fragColor[0] just fine. When I show fragColor[1] on the screen I get the same result as the first one. But I also hardcoded fragColor[1] to return solid grey inside the shader as a test case, and that worked.
So my assumption is that somehow, when I pass my textures to the shader, it treats "normalMap" as "diffuseMap"? That's the only explanation I have for why I would get the same result in fragColor[0] and [1].
Yes: right now the information about which texture unit each sampler should read from never reaches the shader, so both samplers default to unit 0 and refer to the diffuse map. Use glUniform1i() to bind the index of the correct texture unit to each sampler uniform.
It looks like step 1 and step 4 are in the wrong order. This is in step 1:
if( m_uihDiffuseMap != -1 )
glUniform1i( m_uihDiffuseMap, 0 );
if( m_uihNormalMap != -1 )
glUniform1i( m_uihNormalMap, 1 );
and this in step 4:
glUseProgram( m_uiShaderProgramHandle );
glUniform*() calls apply to the currently active program. So glUseProgram() must be called before the glUniform1i() calls.
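In other words, the calls from steps 1 and 4 should end up in roughly this order (a sketch reusing the handle names from the question):
glUseProgram( m_uiShaderProgramHandle ); // make the program current first
m_uihDiffuseMap = glGetUniformLocation( m_iShaderProgramHandle, "diffuseMap" );
if( m_uihDiffuseMap != -1 )
    glUniform1i( m_uihDiffuseMap, 0 ); // "diffuseMap" samples texture unit 0
m_uihNormalMap = glGetUniformLocation( m_iShaderProgramHandle, "normalMap" );
if( m_uihNormalMap != -1 )
    glUniform1i( m_uihNormalMap, 1 ); // "normalMap" samples texture unit 1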
It might also be a good idea to specifically set the location of the out variable:
layout(location = 0) out vec4 fragColor[2];
I don't think this is causing your problem, but I don't see anything in the spec saying that the linker assigns locations starting at 0 if they are not explicitly specified.
In case someone stumbles upon this via a search engine like I did: my problem with writing to multiple textures was a typo. My framebuffer initialization code looked like this:
glGenFramebuffers(1, &renderer.gbufferFboId);
glBindFramebuffer(GL_FRAMEBUFFER, renderer.gbufferFboId);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, renderer.gbufferDepthTexId, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderer.gbufferColorPositionId, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, renderer.gbufferColorColorId, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, renderer.gbufferColorNormalId, 0);
opengl::fbo_check_current_status();
const GLenum drawBuffers[]
{
    GL_COLOR_ATTACHMENT0,
    GL_COLOR_ATTACHMENT1,
    GL_COLOR_ATTACHMENT2
};
glDrawBuffers(1, drawBuffers);
Can you spot the error? It's a small typo in the glDrawBuffers call: the count of course should match the number of draw buffers:
glDrawBuffers(3, drawBuffers);
I'm having some weird memory issues in a C program I'm writing, and I think something related to my texture loading system is the cause.
The problem is that, depending on how many textures I create, different issues come up. With fewer textures, other variables in the program get changed ever so slightly. If I include all the textures I want, the program may spit out a host of different "*** glibc detected ***" type errors, and occasionally a segmentation fault.
The kicker is that occasionally, the program works perfectly. It's all the luck of the draw.
My code is pretty heavy at this point, so I'll just post what I believe to be the relevant parts of it.
d_newTexture(d_loadBMP("resources/sprites/default.bmp"), &textures);
This is the call I use to load a texture into OpenGL. "textures" is a variable of type texMan_t, which is a struct I made:
typedef struct {
    GLuint texID[500];
    int texInc;
} texMan_t;
The idea is that texMan_t encompasses all your texture IDs for easier use. texInc just keeps track of what the next available member of texID is.
This is d_newTexture:
void d_newTexture(imgInfo_t info, texMan_t* tex) {
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &tex->texID[tex->texInc]);
    glBindTexture(GL_TEXTURE_2D, tex->texID[tex->texInc]);
    glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
    gluBuild2DMipmaps( GL_TEXTURE_2D, 4, info.width, info.height, GL_RGBA, GL_UNSIGNED_BYTE, info.data );
    tex->texInc++;
    glDisable(GL_TEXTURE_2D);
}
I also use a function by the name of d_newTextures, which is identical to d_newTexture except that it splits a simple sprite sheet up into multiple textures.
void d_newTextures(imgInfo_t info, int count, texMan_t* tex) {
    glEnable(GL_TEXTURE_2D);
    glGenTextures(count, &tex->texID[tex->texInc]);
    for(int i=0; i<count; i++) {
        glBindTexture(GL_TEXTURE_2D, tex->texID[tex->texInc+i]);
        glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL );
        glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
        glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
        glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
        glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
        gluBuild2DMipmaps( GL_TEXTURE_2D, 4, info.width, info.height/count,
            GL_RGBA, GL_UNSIGNED_BYTE, &info.data[info.width*(info.height/count)*4*i] );
    }
    tex->texInc+=count;
    glDisable(GL_TEXTURE_2D);
}
What could be the cause of the issues I'm seeing?
EDIT: Recently I've also been getting the error "*** glibc detected *** out/PokeEngine: free(): invalid pointer: 0x01010101 ***" after closing the program, assuming it manages to start properly in the first place. The backtrace looks like this:
/lib/i386-linux-gnu/libc.so.6(+0x75ee2)[0xceeee2]
/usr/lib/nvidia-173/libGLcore.so.1(+0x277c7c)[0x109ac7c]
EDIT 2:
Here's the code for d_loadBMP as well. Hope it helps!
imgInfo_t d_loadBMP(char* filename) {
    imgInfo_t out;
    FILE * bmpFile;
    bmpFile = fopen(filename, "r");
    if(bmpFile == NULL) {
        printf("ERROR: Texture file not found!\n");
    }
    bmp_sign bmpSig;
    bmp_fHeader bmpFileHeader;
    bmp_iHeader bmpInfoHeader;
    fread(&bmpSig, sizeof(bmp_sign), 1, bmpFile);
    fread(&bmpFileHeader, sizeof(bmp_fHeader), 1, bmpFile);
    fread(&bmpInfoHeader, sizeof(bmp_iHeader), 1, bmpFile);
    out.width = bmpInfoHeader.width;
    out.height = bmpInfoHeader.height;
    out.size = bmpInfoHeader.imageSize;
    out.data = (char*)malloc(sizeof(char)*out.width*out.height*4);
    // Loaded backwards because that's how BMPs are stored
    for(int i=out.width*out.height*4; i>0; i-=4) {
        fread(&out.data[i+2], sizeof(char), 1, bmpFile);
        fread(&out.data[i+1], sizeof(char), 1, bmpFile);
        fread(&out.data[i], sizeof(char), 1, bmpFile);
        out.data[i+3] = 255;
    }
    return out;
}
The way you're loading BMP files is wrong. You're reading right into structs, which is very unreliable, because the memory layout your compiler chooses for a struct may vastly differ from the data layout in a file. Also your code contains zero error checks. If I had to make an educated guess I'd say this is where your problems are.
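One way to make the header parsing robust is to read the fields you need at their documented byte offsets, with explicit little-endian decoding and error checks, instead of fread-ing whole structs. A rough sketch for the common BITMAPINFOHEADER layout (not a drop-in replacement for d_loadBMP; the helper names are made up):
#include <stdint.h>
#include <stdio.h>

/* Read a little-endian 32-bit value; returns 0 on failure. */
static int read_le_u32(FILE *f, uint32_t *out) {
    unsigned char b[4];
    if (fread(b, 1, 4, f) != 4) return 0;
    *out = (uint32_t)b[0] | ((uint32_t)b[1] << 8) | ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
    return 1;
}

/* Width and height live at byte offsets 18 and 22 of the file,
   independent of any compiler struct padding. */
static int read_bmp_size(FILE *f, uint32_t *width, uint32_t *height) {
    if (fseek(f, 18, SEEK_SET) != 0) return 0;
    return read_le_u32(f, width) && read_le_u32(f, height);
}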
BTW: glEnable(GL_TEXTURE_…) enables a texture target as a data source for rendering. It's completely unnecessary for just generating and uploading textures, so you can omit the bracing glEnable(GL_TEXTURE_2D); … glDisable(GL_TEXTURE_2D) blocks in your loading code. Also I would not use gluBuild2DMipmaps – it doesn't support arbitrary texture dimensions, and you're disabling mipmapping anyway (GL_NEAREST min filter) – and would just upload directly with glTexImage2D.
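Concretely, the upload inside d_newTexture could then shrink to something like this (a sketch; it assumes info.data is tightly packed RGBA, as produced by the loader above):
glBindTexture(GL_TEXTURE_2D, tex->texID[tex->texInc]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
/* Single mip level only, which matches the GL_NEAREST min filter. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, info.width, info.height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, info.data);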
Also I don't see the need for a texture manager, or at least not why yours looks like this. A much better approach would be a hash map from file path to texture ID, plus a reference count.
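For example, something along these lines (a C++ sketch of the idea; loadTextureFromFile is a hypothetical helper wrapping your d_loadBMP plus the texture upload, and the GL headers are assumed to be included):
#include <string>
#include <unordered_map>

struct TexEntry {
    GLuint id = 0;
    int refCount = 0;
};

static std::unordered_map<std::string, TexEntry> g_textures; // file path -> texture

GLuint acquireTexture(const std::string& path) {
    TexEntry& e = g_textures[path];
    if (e.refCount++ == 0)
        e.id = loadTextureFromFile(path); // hypothetical: load image + glTexImage2D
    return e.id;
}

void releaseTexture(const std::string& path) {
    TexEntry& e = g_textures[path];
    if (--e.refCount == 0) {
        glDeleteTextures(1, &e.id);
        g_textures.erase(path);
    }
}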
I'm trying to write a class to do fragment shader chaining: using a Frame Buffer Object, I render to a texture with a fragment shader, then render that texture into another texture with a fragment shader, and so on.
Right now I am trying to track down a memory leak: when I resize my window and delete/reallocate the textures I am using, the textures are not being deleted properly.
Here is a code snippet:
//Allocate first texture
glGenTextures( 1, &texIds[0] );
glBindTexture( GL_TEXTURE_2D, texIds[0] );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, screenX, screenY, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
//Allocate second texture
glGenTextures( 1, &texIds[1] );
glBindTexture( GL_TEXTURE_2D, texIds[1] );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, screenX, screenY, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
//Try to free first texture -- ALWAYS FAILS
glDeleteTextures( 1, &texIds[0] );
//Try to free second texture
glDeleteTextures( 1, &texIds[1] );
When I run this with gDEBugger, it tells me "Warning: The debugged program delete a texture that does not exist. Texture name: 1" when I try to delete texIds[0]. (The reason I have them in an array right now is that I used to create and free them at the same time; however, when you free 2 textures at once, it fails silently on one and continues with the other.)
If I don't create texIds[1], I can free texIds[0], but as soon as I create a second texture, I can no longer free the first texture I create. Any ideas?
Perhaps the error is in the texIds array. Is it an array of GLuint?
If you erroneously declared it as an array of a smaller type (for example a 16-bit word), then generating the texture for element [1] overwrites part of the 4 bytes that a GLuint pointer taken from element [0] covers, so the name you later pass to glDeleteTextures through &texIds[0] is broken.
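To illustrate the difference (a small sketch):
// Wrong: elements are only 16 bits wide, so a glGenTextures call that writes a
// full 32-bit name through a GLuint* overruns into the neighbouring element.
unsigned short badIds[2];
glGenTextures(1, (GLuint*)&badIds[0]); // undefined behaviour, corrupts badIds[1]

// Right: the element type matches what glGenTextures writes.
GLuint texIds[2];
glGenTextures(1, &texIds[0]);
glGenTextures(1, &texIds[1]);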