Segfault in glGenFramebuffers - c++

I'm getting segfaults and can't figure out why. The person I'm working with compiles and runs the same code correctly on an OS X machine. A gdb backtrace tells me the crash comes from this section of code, specifically from glGenFramebuffers:
// Render the warped texture mapped triangles to framebuffer
GLuint myFBO;
GLuint myTexture;
glEnable(GL_TEXTURE_2D);
glGenTextures(1, &myTexture);
glBindTexture(GL_TEXTURE_2D, myTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, size.width, size.height, 0, GL_RGB, GL_UNSIGNED_BYTE, 0);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glGenFramebuffers(1, &myFBO);
glBindFramebuffer(GL_FRAMEBUFFER, myFBO);
I'm running Ubuntu 12.04 with an Nvidia card, using the latest proprietary Nvidia drivers provided by the OS. I'm not terribly familiar with OpenGL; a lot of this code is my partner's, and he seems to be stumped as well. If you need any further information, I'm happy to provide it.

The answer actually turned out to be really simple. On OS X you don't need to call glewInit() before using the functions GLEW exposes, but on Linux and Windows you do: until glewInit() runs, extension entry points such as glGenFramebuffers are null function pointers, and calling one segfaults. Another bit of interesting information I found out: check whether you're able to perform direct rendering using glxinfo. It can make all the difference when running OpenGL programs.
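For anyone hitting the same crash, here's a minimal sketch of the initialization order that matters. It assumes GLFW for window and context creation (the original post doesn't say which toolkit was used); the essential point is that glewInit() runs after a context is current and before the first extension call:
#include <GL/glew.h>    // must come before other GL headers
#include <GLFW/glfw3.h>
#include <cstdio>
int main() {
    if (!glfwInit())
        return 1;
    GLFWwindow *win = glfwCreateWindow(640, 480, "fbo test", nullptr, nullptr);
    if (!win)
        return 1;
    glfwMakeContextCurrent(win);   // a current context is required before glewInit
    glewExperimental = GL_TRUE;    // helps GLEW find core-profile entry points
    if (glewInit() != GLEW_OK) {
        std::fprintf(stderr, "glewInit failed\n");
        return 1;
    }
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);    // safe now: the function pointer is resolved
    glDeleteFramebuffers(1, &fbo);
    glfwTerminate();
    return 0;
}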

Related

OpenGL texture gets worse when moving camera away from object

The texture generally gets worse the further I move the camera away from the object; I need to be really close to the object for the texture to look fine. Does anyone know what would cause this and how to fix it?
This is how I load the texture:
stbi_set_flip_vertically_on_load(1); // OpenGL expects the first texel row at the bottom
m_Local_buffer = stbi_load(path.c_str(), &m_width, &m_height, &m_bpp, 4);
glGenTextures(1, &m_rendererID);
glBindTexture(GL_TEXTURE_2D, m_rendererID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, m_Local_buffer);
glGenerateMipmap(GL_TEXTURE_2D); // build the full mip chain for the minification filter above
glBindTexture(GL_TEXTURE_2D, 0);
[image of the bad texture]
Edit: updated the code.
For mipmapping you have to use one of the mipmap minification filters, e.g. GL_LINEAR_MIPMAP_LINEAR (see glTexParameter). Replace
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
with
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
The problem turned out to be that faces in the model were too close to each other, which caused Z-fighting, and that caused the flickering. It was not caused by the texture or by my OpenGL code as I originally assumed. When I switched to a model that didn't have this issue, everything rendered correctly without any flickering.
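For what it's worth, if changing the model isn't an option, a common code-side mitigation for Z-fighting between nearly coplanar faces is polygon offset; here's a minimal sketch (the values are illustrative, not from the original post):
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.0f, 1.0f); // push this geometry slightly deeper in the depth buffer
// ... draw the surface that should lose the depth tie ...
glDisable(GL_POLYGON_OFFSET_FILL);
Pushing the near plane of the projection matrix further out also helps, since depth precision is concentrated near the camera.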

Framebuffer Incomplete Attachment for Texture with Internal Format?

So I am trying to create a framebuffer where I render to a texture, but I can't seem to get it to work with the format I need, which is GL_RGB32F. It works for GL_RGB16F and GL_RGBA32F, so I don't understand why GL_RGB32F gives me GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT from glCheckFramebufferStatus. I get no errors from the calls creating the texture either. Is there a special requirement for using that internal format? Can I check whether I have support for it?
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glGenTextures(1, &positionTexture);
glBindTexture(GL_TEXTURE_2D, positionTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, width, height, 0, GL_RGB, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, positionTexture, 0);
GL_RGB16F and GL_RGB32F are not required color buffer formats in the GL, while GL_RGBA32F is. Have a look at Table 8.12 of the OpenGL 4.5 core profile specification (pages 198-200, assuming the June 2017 revision of said document). It tells you that all of these formats are color-renderable, but only GL_RGBA32F out of that set is a required render format. Implementations may support the others optionally, but you can't rely on that.
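As for checking support at runtime: on GL 4.3+ (or with the ARB_internalformat_query2 extension) you can ask the implementation directly whether a format is renderable. A sketch of what that query could look like, under those version assumptions:
GLint supported = GL_NONE;
glGetInternalformativ(GL_TEXTURE_2D, GL_RGB32F, GL_FRAMEBUFFER_RENDERABLE, 1, &supported);
if (supported == GL_FULL_SUPPORT) {
    // GL_RGB32F can be used as a render target on this implementation
} else {
    // fall back to GL_RGBA32F, which is a required renderable format
}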

glGenerateMipmap fails if not followed by a glGetError... wait, what?

I've been having a very weird issue with my project's texture generation. The first mipmapped texture works flawlessly, but the next ones only draw the first level. While debugging I suddenly came upon a hack that fixes it:
glGenTextures(1, &texture->textureID);
glBindTexture(GL_TEXTURE_2D, texture->textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexStorage2D(GL_TEXTURE_2D, 10, GL_RGBA8, width, height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
glGenerateMipmap(GL_TEXTURE_2D);
assert(glGetError() == GL_NO_ERROR); // Mipmapping fails if glGetError is not here
glBindTexture(GL_TEXTURE_2D, 0);
Why on earth does this only work when a glGetError call (which, as you can see from the assert, ALWAYS returns GL_NO_ERROR) is placed after glGenerateMipmap? Why does it have anything to do with it?
I'm currently using a GeForce GTX 670 with the latest GeForce 340.52 driver.
Edit: a couple of images might help:
[image: result with the glGetError() call]
[image: result without the glGetError() call]
Referring to Is iOS glGenerateMipmap synchronous, or is it possibly asynchronous?, it seems glGenerateMipmap works asynchronously. My project uses shared contexts to create shaders, textures and meshes (sorry if I didn't mention this; I didn't think it would matter).
The thing is, as soon as texture generation finished, the "textures generated" flag was raised and the shared context was destroyed, so the last glGenerateMipmap never got flushed through the pipeline. The call to glGetError has to flush pending operations in order to report any errors from them, and that is exactly why it was making everything work flawlessly.
So, in other words, if you're doing work on a separate shared context, you need to explicitly call glFinish before killing that thread, or some operations may be left undone.
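A minimal sketch of that loader-thread pattern, assuming GLFW with a hidden shared context (mainWindow and loaderCtx are hypothetical names; the original post doesn't say which toolkit it used). The key line is the glFinish() before the context goes away:
// main thread: create a hidden context sharing objects with the main one
glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
GLFWwindow *loaderCtx = glfwCreateWindow(1, 1, "", nullptr, mainWindow);
std::thread loader([loaderCtx] {
    glfwMakeContextCurrent(loaderCtx);
    // ... glTexStorage2D / glTexSubImage2D / glGenerateMipmap ...
    glFinish();                      // ensure all queued GL work has completed
    glfwMakeContextCurrent(nullptr); // release the context before the thread exits
});
loader.join();
glfwDestroyWindow(loaderCtx);        // only now is it safe to destroy the context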

Bind Texture in Repeat Mode

I am trying to store an octree in a 3D texture in OpenGL for use on the GPU with Cg, following a chapter in GPU Gems 2 found here: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter37.html. However, the results I am getting are incorrect, and I think it's because of how I create the octree texture.
In the appendix of that chapter, it says "If we bind the indirection pool texture (octree texture) in repeat mode (GL_REPEAT)...".
Does this simply mean setting the filters and wrapping to repeat, or do I need to do something else? This is my code so far:
glGenTextures(1, &octree_texture);
glBindTexture(GL_TEXTURE_3D, octree_texture);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_REPEAT);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_REPEAT);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_REPEAT);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA, WIDTH, HEIGHT, DEPTH, 0, GL_RGBA, GL_UNSIGNED_BYTE, octreeData);
Thanks for the help :)
The filters can't be GL_REPEAT; that will generate a GL error, since it isn't a valid filter mode. Only the wrap modes (GL_TEXTURE_WRAP_S/T/R) accept GL_REPEAT, and that's probably what the book means.
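So the setup would look something like the following, with valid filter modes substituted in. GL_NEAREST is my assumption here, on the grounds that an octree indirection texture holds pointer-like values that shouldn't be blended; the answer itself doesn't prescribe a filter:
glGenTextures(1, &octree_texture);
glBindTexture(GL_TEXTURE_3D, octree_texture);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // valid filter mode
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // valid filter mode
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_REPEAT); // GL_REPEAT belongs on the wrap modes
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_REPEAT);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA, WIDTH, HEIGHT, DEPTH, 0, GL_RGBA, GL_UNSIGNED_BYTE, octreeData);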

OpenGL texture randomly not shown

I have got a very, very strange problem in my C++ OpenGL application.
I simply load a texture and apply it to a quadric:
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, 3, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image); // legacy form: internalformat 3 means three components, i.e. GL_RGB
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
Then
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
gluQuadricDrawStyle(quad,GLU_FILL);
gluQuadricTexture(quad,GL_TRUE);
gluCylinder(quad,1,0,2,20,1);
glDisable(GL_TEXTURE_2D);
Now: it works perfectly nine times out of ten, but sometimes the texture isn't shown (the quadric stays white).
The image is loaded correctly, so the problem should be on the OpenGL side. I have tried several different images too. glGetError() always returns GL_NO_ERROR.
Any idea? It's driving me crazy...
Found it :) It was the GLint texture member that wasn't being handled correctly in the copy constructor.
However, I still don't understand why it worked sometimes...
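This is a classic hazard with raw GL handles stored in copyable classes: the compiler-generated copy constructor duplicates the integer ID, so two objects end up sharing (and eventually one of them deleting or overwriting) the same texture, and whether you notice depends on which copy gets used. One way to rule it out is move-only ownership; a hedged sketch (a hypothetical wrapper, not the poster's actual class):
class Texture {
    GLuint id = 0;
public:
    explicit Texture(GLuint id) : id(id) {}
    ~Texture() { if (id) glDeleteTextures(1, &id); }
    Texture(const Texture &) = delete;            // no silently shared IDs
    Texture &operator=(const Texture &) = delete;
    Texture(Texture &&other) noexcept : id(other.id) { other.id = 0; }
    Texture &operator=(Texture &&other) noexcept {
        if (this != &other) {
            if (id) glDeleteTextures(1, &id);
            id = other.id;
            other.id = 0;
        }
        return *this;
    }
    GLuint handle() const { return id; }
};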
The code you are using seems valid. Have you ...
- tried using a simple quad instead of the quadric?
- made sure that image is filled correctly?
- verified that tex is not altered somewhere else?
- made sure that no other programs are using OpenGL at the same time?
- restarted your computer? ;)