OpenGL Framebuffer not complete on Nvidia when using Cube Map Textures

Edit: I narrowed down the cause and simplified the problem
I noticed, when playing around with OpenGL layered rendering, that Nvidia drivers/GPUs have trouble with framebuffers that have cube map textures bound to them. Intel iGPUs have no problem with this, but when I switch over to my dedicated GPU, I get an error.
The code below recreates the error:
GLuint environment = 0; glCreateTextures(GL_TEXTURE_CUBE_MAP, 1, &environment);
constexpr GLsizei resolution = 512;
glTextureStorage2D(environment, 1, GL_RGB32F, resolution, resolution);
glTextureParameteri(environment, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTextureParameteri(environment, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTextureParameteri(environment, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTextureParameteri(environment, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTextureParameteri(environment, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTextureParameteri(environment, GL_TEXTURE_BASE_LEVEL, 0);
glTextureParameteri(environment, GL_TEXTURE_MAX_LEVEL, 0);
GLuint capture_fbo = 0; glCreateFramebuffers(1, &capture_fbo);
GLuint capture_rbo = 0; glCreateRenderbuffers(1, &capture_rbo);
glNamedRenderbufferStorage(capture_rbo, GL_DEPTH_COMPONENT32F, resolution, resolution);
glNamedFramebufferTexture(capture_fbo, GL_COLOR_ATTACHMENT0, environment, 0);
glNamedFramebufferRenderbuffer(capture_fbo, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, capture_rbo);
if (glCheckNamedFramebufferStatus(capture_fbo, GL_DRAW_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE)
{
    fmt::print("Framebuffer Complete!\n");
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, capture_fbo);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}
else fmt::print("Framebuffer Not Complete!\n");
When I query the framebuffer status after running the code above, it reports that it isn't complete until I unbind and rebind the framebuffer, after which it reports complete again even though nothing has changed.
The console output on Nvidia:
Manufacturer: NVIDIA Corporation
GPU: GeForce GTX 1060 with Max-Q Design/PCIe/SSE2
OpenGL Version: 4.5.0 NVIDIA 416.34
GLSL Version: 4.50 NVIDIA
Framebuffer Complete!
OpenGL [API Error 1286] (High): GL_INVALID_FRAMEBUFFER_OPERATION error generated. Operation is not valid because a bound framebuffer is not framebuffer complete.
Console output on Intel:
Manufacturer: Intel
GPU: Intel(R) UHD Graphics 630
OpenGL Version: 4.5.0 - Build 23.20.16.4973
GLSL Version: 4.50 - Build 23.20.16.4973
Framebuffer Complete!
Am I encountering a driver bug? Since it works on one vendor but not the other, is there some vendor-specific requirement for Nvidia framebuffers?

Cubemap textures effectively have 6 layers, so calling glNamedFramebufferTexture attaches the cubemap as a layered image. For a framebuffer to be complete, either all attached images must be layered or none of them may be; mixing the two yields GL_FRAMEBUFFER_INCOMPLETE_LAYER_TARGETS. Renderbuffer images are never layered, so you have a framebuffer which should not be complete.
So while glCheckNamedFramebufferStatus should not have returned "complete", NVIDIA is closer to being correct than Intel (not a surprise): it at least raises GL_INVALID_FRAMEBUFFER_OPERATION once you try to clear through the incomplete framebuffer.
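Two ways to make the attachments consistent, sketched here against the same environment, capture_fbo, capture_rbo, and resolution from the question (which one to pick depends on whether you actually need layered rendering):
// Option A: no layered attachments. Keep the depth renderbuffer and attach a
// single cube face (face 0 here); render the six faces in six passes,
// switching the layer index between passes.
glNamedFramebufferTextureLayer(capture_fbo, GL_COLOR_ATTACHMENT0, environment, 0, 0);
glNamedFramebufferRenderbuffer(capture_fbo, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, capture_rbo);

// Option B: all attachments layered. Replace the depth renderbuffer with a
// depth cube map so geometry-shader layered rendering keeps working.
GLuint depth_cube = 0;
glCreateTextures(GL_TEXTURE_CUBE_MAP, 1, &depth_cube);
glTextureStorage2D(depth_cube, 1, GL_DEPTH_COMPONENT32F, resolution, resolution);
glNamedFramebufferTexture(capture_fbo, GL_COLOR_ATTACHMENT0, environment, 0);
glNamedFramebufferTexture(capture_fbo, GL_DEPTH_ATTACHMENT, depth_cube, 0);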

Related

OpenGL Invalid Texture or State

We are developing a C++ plug-in within an OpenGL application. The application calls a "render" method on our plug-in as necessary. While rendering our textures, we noticed that sometimes some of them are drawn completely white even though they are created with valid data; which texture is affected, and when, appears to be random. While investigating what could cause some of the textures to render white, I noticed that simply trying to retrieve the size of a texture (even for the ones that render correctly) doesn't work. Here is the simple code to create the texture and retrieve its size:
GLuint textureId;
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageWidth, imageHeight, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, imageDdata);
// try to lookup the size of the texture
int textureWidth = 0, textureHeight = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &textureWidth);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &textureHeight);
glBindTexture(GL_TEXTURE_2D, 0);
The image width and height input to glTexImage2D are 1536 x 1536, yet the values returned from glGetTexLevelParameter are 16384 x 256. In fact, any width and height I pass to glTexImage2D results in an output of 16384 x 256. If I pass a width and height of 64 x 64, I still get back 16384 x 256.
I am using the same simple texture load/render code in another standalone test application and it works correctly all the time. However, I get these white textures when I use the code within this larger application. I have also verified that glGetError() returns 0.
I am assuming the containing application is setting some OpenGL state that is causing problems when we try to render our textures. Do you have any suggestions for things to check that could cause these white textures OR invalid texture dimensions?
Update
Both my test application that renders correctly and the integrated application that doesn't render correctly are running within a VM on Windows 7 with Accelerated 3D Graphics enabled. Here is the VM environment:
CentOS 7.0
OpenGL 2.1 Mesa 9.2.5
Did you check that you have a valid OpenGL context active when this code is called? The values you get back may be uninitialized garbage left in the variables, which doesn't get modified if glGetTexLevelParameter fails for some reason. Note that glGetError may return GL_NO_ERROR if there's no OpenGL context active.
To check whether there's an OpenGL context, use wglGetCurrentContext (Windows), glXGetCurrentContext (X11 / GLX) or CGLGetCurrentContext (Mac OS X CGL) to query the active OpenGL context; if none is active, all of these functions return NULL.
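For example, on the CentOS/GLX setup from the update, a minimal sanity check before doing any texture work could look like this (the function name and logging are my own, hypothetical additions):
#include <GL/glx.h>
#include <cstdio>
// Returns true if an OpenGL context is current on the calling thread (GLX).
static bool haveCurrentGLContext()
{
    if (glXGetCurrentContext() == NULL)
    {
        std::fprintf(stderr, "No OpenGL context is current on this thread\n");
        return false;
    }
    return true;
}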
Just FYI: you should use GLint for retrieving integer values from OpenGL. The reason is that the OpenGL types have very specific sizes, which may differ from the primitive C types of the same name. For example, a C unsigned int may be anywhere from 16 to 64 bits in size, while an OpenGL GLuint is always fixed at 32 bits.
https://www.opengl.org/sdk/docs/man/docbook4/xhtml/glGetTexLevelParameter.xml
glGetTexLevelParameter returns the texture level parameters for the texture bound on the active texture unit.
Try cycling glActiveTexture through all the units and see if you are getting default values, as in the sketch below.
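A minimal sketch of that idea, assuming an OpenGL 2.x context and <cstdio> for the printout; it walks every fragment texture unit and prints the level-0 size of whatever 2D texture is bound there:
GLint unitCount = 0;
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &unitCount);
for (GLint unit = 0; unit < unitCount; ++unit)
{
    GLint w = 0, h = 0;
    glActiveTexture(GL_TEXTURE0 + unit);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &w);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &h);
    std::printf("unit %d: level 0 is %d x %d\n", unit, w, h);
}
glActiveTexture(GL_TEXTURE0); // restore the default unit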

Why is there just garbage data in texture layers beyond 2048?

I am trying to use a texture_2d_array with up to 8192 layers. But all layers after the 2048th just contain garbage data (tested by mapping the individual layers on a quad to visualize the texture).
Querying the maximum number of layers with
glGetIntegerv(GL_MAX_ARRAY_TEXTURE_LAYERS, &maxTexLayers);
returns 8192 for my graphics card (AMD 5770), and the same for an AMD 7850. My only other available graphics card is an NVidia 480, which supports just 2048 layers.
I use the following code to create the texture:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &tex);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAX_LEVEL, 7);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 8, GL_RGB8, 128, 128, 8192);
//glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGB, 128, 128, 8192, 0, GL_RGB, GL_UNSIGNED_BYTE, nullptr);
std::vector<char> image = readImage(testImagePath);
for (unsigned int i = 0; i < 8192; ++i)
{
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, i, 128, 128, 1, GL_RGB, GL_UNSIGNED_BYTE, image.data());
}
GLuint tLoc = glGetUniformLocation (program, "texArray");
glProgramUniform1i (program, tLoc, 0);
(here https://mega.co.nz/#!FJ0gzIoJ!Kk0q_1xv9c7sCTi68mbKon1gDBUM1dgjrxoBJKTlj6U you can find a cut-down version of the program)
I am out of ideas:
Changing glTexStorage3D to glTexImage3D - no change
Playing with the base/max level - no change
setting MIN_FILTER to GL_LINEAR - no change
generating mipmaps (glGenerateMipmap) - no change
reducing the size of the layers to e.g. 4x4 - no change
reducing the number of layers to e.g. 4096 - no change
switching to an AMD 7850 - no change
enabling debug context - no errors
etc. and a lot of other stuff
So, it could be a driver bug with the driver reporting the wrong number for GL_MAX_ARRAY_TEXTURE_LAYERS, but maybe I missed something and one of you has an idea.
EDIT: I am aware that such a texture would use quite a lot of memory, and that even if my graphics card had that much available, OpenGL does not guarantee that I can allocate it. But I am getting no errors with a debug context enabled, in particular no GL_OUT_OF_MEMORY, and I also tried it with a layer size of 4x4, which would be just 512 KB.
Two things:
1) Wouldn't 8192 layers that are 128x128 be over 500 MB of data? (Or 400 MB if it's RGB instead of RGBA.) It could be that OpenGL can't allocate that much memory, even if your card has that much, due to fragmentation or other issues.
2) Just because OpenGL says the max is 8192 (or larger) doesn't mean that you're guaranteed to be able to use that much in every case. For example, my driver claims the card can handle a max texture size of 8192 on a side, but if I try to create a 4096x4096 image that's 32-bit floating point RGBA, it fails, even though it's only 268 MB and I have a gig of VRAM.
Does glGetError() return any errors?
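One way to probe the second point without committing memory is a proxy texture query; a minimal sketch with the sizes from the question (the proxy check only validates dimensions and format against implementation limits, so it still can't promise that the real allocation will succeed; <cstdio> assumed for the printout):
// Describe the allocation against the proxy target; no storage is created.
glTexImage3D(GL_PROXY_TEXTURE_2D_ARRAY, 0, GL_RGB8, 128, 128, 8192, 0,
             GL_RGB, GL_UNSIGNED_BYTE, nullptr);
// If the request cannot be honored, the proxy's level parameters read back as 0.
GLint probedWidth = 0;
glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D_ARRAY, 0, GL_TEXTURE_WIDTH, &probedWidth);
if (probedWidth == 0)
    std::printf("128x128x8192 GL_RGB8 array texture is not supported\n");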

OpenGL shader ignore texture

I have recently integrated Awesomium into an OpenGL application.
When I load Awesomium into a texture, OpenGL includes it in its shading process regardless of whether I draw the texture onto a surface or not.
I am trying to track down the line of code that is pulling the texture into the shaders. Is there a specific function OpenGL uses to access all textures, or a way to tell OpenGL to ignore the texture?
Update texture block
glBindTexture(GL_TEXTURE_2D, SkypeHUD);
glTexImage2D(GL_TEXTURE_2D, 0, 4, AwesomiumW, AwesomiumH, 0, GL_BGRA, GL_UNSIGNED_BYTE, surface->buffer());
Create texture block
glBindTexture(GL_TEXTURE_2D, SkypeHUD);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
glBindTexture(GL_TEXTURE_2D, 0);
Drawing the scene without the texture being loaded: http://puu.sh/2bVTV
Drawing the scene after I have loaded the texture: http://puu.sh/2bVUb
You can see it blending the google texture in over the others.
Texture enable/disable should be controlled by the shader code, not some client binding state. Anyway, you most likely use several texture units (glActiveTexture); the texture binding is individual to each unit, so you'll have to do some legwork and unbind textures from each unit if you want to go this route, as sketched below.
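A minimal sketch of that legwork; only the unit-count query is added on top of the calls named above:
// Unbind the 2D texture from every texture unit.
GLint unitCount = 0;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &unitCount);
for (GLint unit = 0; unit < unitCount; ++unit)
{
    glActiveTexture(GL_TEXTURE0 + unit);
    glBindTexture(GL_TEXTURE_2D, 0);
}
glActiveTexture(GL_TEXTURE0); // back to the default unit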

Segfault in glGenFramebuffers

I'm getting segfaults and can't figure out why. The person I'm working with compiles and runs correctly on an OSX machine. gdb backtrace gives me that it's coming from this section of code, specifically, from glGenFramebuffers:
// Render the warped texture mapped triangles to framebuffer
GLuint myFBO;
GLuint myTexture;
glEnable(GL_TEXTURE_2D);
glGenTextures(1, &myTexture);
glBindTexture(GL_TEXTURE_2D, myTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, size.width, size.height, 0, GL_RGB, GL_UNSIGNED_BYTE, 0);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glGenFramebuffers(1, &myFBO);
glBindFramebuffer(GL_FRAMEBUFFER_EXT, myFBO);
I'm running 12.04 Ubuntu with an Nvidia card using the latest proprietary drivers from Nvidia provided by the OS. I'm not incredibly familiar with OpenGL, a lot of this code is my partner's, and he seems to be stumped as well. If you need any further information, I'm happy to provide it.
The answer actually turned out to be really simple. OS X coders don't need to call glewInit() before they start using GLEW-loaded calls; Linux and Windows users do. Also, another bit of interesting information I found out: check whether you're able to perform direct rendering using glxinfo. It can make all the difference when running OpenGL programs.
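For reference, the usual pattern is to initialize GLEW once, right after the context has been created and made current and before any glGenFramebuffers call (a minimal sketch; the error handling is my own addition):
#include <GL/glew.h>
#include <cstdio>
#include <cstdlib>
// Call once after the OpenGL context is current, before any FBO entry points are used.
glewExperimental = GL_TRUE; // some drivers need this to expose core entry points
GLenum glewStatus = glewInit();
if (glewStatus != GLEW_OK)
{
    std::fprintf(stderr, "glewInit failed: %s\n", (const char*)glewGetErrorString(glewStatus));
    std::exit(EXIT_FAILURE);
}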

iPhone OpenGL ES incorrect alpha blending

I have a problem with incorrect alpha blending results with openGL ES on iPhone.
This is my code for creating texture object:
glGenTextures(1, &tex_name);
glBindTexture(GL_TEXTURE_2D, tex_name);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tex_width, tex_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, tex_data);
'tex_data' is loaded from raw RGBA8888 data packed with zlib. It loads as it should, which I've checked with a debugger.
This is my code for setting up texture before rendering:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindTexture(GL_TEXTURE_2D, tex_name);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
I've uploaded a sample of what I expected and what I got here: sample. In the sample, most of the texture at the bottom is pitch-black with 70% opacity. However, OpenGL renders it as gray. This problem affects all of the textures I use blending with.
I've tested the code on Windows using the OGLES PVRVFrame emulator and the results are as expected: black is rendered as black.
Found the problem. I'd forgotten to set the opaque property of the EAGLView's CAEAGLLayer to YES.
Will this help? glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA). I think this just blends the two instead of blending both against the background.
Sorry if I don't understand.
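For context, GL_ONE / GL_ONE_MINUS_SRC_ALPHA is the factor pair for premultiplied alpha, i.e. texture data whose RGB has already been multiplied by its alpha. A minimal sketch of the two setups; which one is right depends on how tex_data was produced:
glEnable(GL_BLEND);
// Straight (non-premultiplied) RGBA data, as in the question:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// Premultiplied RGBA data instead:
// glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);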