Unable to display NPOT texture with OpenGL

I'm writing a simple C++ application to figure out why I can't display any NPOT texture using GL_TEXTURE_2D as the texture target. At some point I'll need to generate mipmaps, so
using a rectangle texture isn't an option.
I'm running Win7 Pro (x64), an Intel i7-2600K CPU, 8 GB RAM, and an NVIDIA Quadro 600. My Quadro's driver is 296.35, which supports OpenGL 4.2.
1. Working version
When using the "default" texture (texture object 0), it runs smoothly and displays any NPOT texture.
glBindTexture( target = GL_TEXTURE_2D, texture = 0 )
2. Broken version
Since I'll need more than one texture, I need to get their "names".
As soon as I try to bind a named texture, calling glGenTextures and passing the resulting texture to glBindTexture, all I get is a white rectangle.
glGenTextures(n = 1, textures = &2)
glBindTexture(target = GL_TEXTURE_2D, texture = 2)
I'm sure that my hardware supports NPOT textures; I've checked the OpenGL extensions and "GL_ARB_texture_non_power_of_two" is listed. Also, I'm using and instead of the typical Windows version of .
I've also asked this question on the NVIDIA forums, and in that post you'll find both log files.

GL_TEXTURE_MIN_FILTER defaults to GL_NEAREST_MIPMAP_LINEAR, so a texture without a full mipmap chain is incomplete and samples as opaque white.
Either upload/generate the mipmaps or switch the minification filter to GL_NEAREST or GL_LINEAR.
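A minimal sketch of either fix, assuming a texture object tex (the name is hypothetical) whose base level has already been uploaded with glTexImage2D:
glBindTexture(GL_TEXTURE_2D, tex);
// Option A: stop requiring mipmaps for minification.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Option B: keep the mipmapped default and build the full chain (GL 3.0+).
glGenerateMipmap(GL_TEXTURE_2D);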

Related

GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT when using texture with internal format GL_R16_SNORM

I am rendering 16-bit grayscale images using OpenGL and need to fetch the rendered data into a buffer without bit-depth decimation. My code works perfectly on Intel(R) HD Graphics 4000, but on nVidia graphics cards (NVIDIA Corporation GeForce G 105M/PCIe/SSE2, NVIDIA Corporation GeForce GT 640M LE/PCIe/SSE2) it fails with status GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT when I try to render to a texture with internal format GL_R16_SNORM. It works when I use the GL_R16 format, but then I also lose all negative values of the rendered texture. With Intel HD Graphics it works with both GL_R16_SNORM and GL_R16.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16_SNORM, width, height, 0, GL_RED, GL_SHORT, null);
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferId);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureId, 0);
glDrawBuffers(1, new int[] {GL_COLOR_ATTACHMENT0}, 0);
int status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
What is the reason for the GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT status on nVidia cards? Can you advise how to debug this situation? (Unfortunately, glGetError() == GL_NO_ERROR.)
EDIT: After further testing I found out it also works on ATI graphics cards. As Reto Koradi answered, support in OpenGL versions before 4.4 is manufacturer specific, and it seems to work on ATI and Intel cards but not on nVidia cards.
GL_R16_SNORM only became a required color-renderable format with OpenGL 4.4.
If you look at the big texture format table for example in the 4.3 spec (starting on page 179), the "color-renderable" field is checked for R16_SNORM. But then on page 178, under "Required Texture Formats", R16_SNORM is listed as a "texture-only color format". This means that while implementations can support rendering to this format, it is not required.
In the 4.4 spec, the table (starting on page 188) basically says the same. R16_SNORM has a checkmark in the column "CR", which means "color-renderable". But the rules were changed. In 4.4 and later, all color-renderable formats have to be supported by implementations. Instead of containing lists, the "Required Texture Formats" section on page 186 now simply refers to the table. This is also mentioned in the change log section of the spec.
This means that unless you require at least OpenGL 4.4, support for rendering to this format is indeed vendor/hardware dependent.
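If you can count on OpenGL 4.3 or ARB_internalformat_query2 being present, you can also probe the driver at run time instead of hard-coding assumptions; a sketch (the fallback choice is up to you):
GLint support = GL_NONE;
glGetInternalformativ(GL_TEXTURE_2D, GL_R16_SNORM, GL_FRAMEBUFFER_RENDERABLE, 1, &support);
if (support != GL_FULL_SUPPORT) {
    // Rendering to GL_R16_SNORM may fail on this driver; fall back to GL_R16
    // plus a scale/bias in the shader, or to a float format such as GL_R16F.
}
On older contexts the only portable check is what the question already does: attach the texture and test glCheckFramebufferStatus.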

glClear() gives GL_OUT_OF_MEMORY on Intel HD 4000 (GL 4.0) but not GeForce (GL 4.2)... why?

Now, this is an extremely odd behavior.
TL;DR -- in a render-to-texture setup, upon resizing the window (framebuffer 0), only the very next call to glClear(GL_COLOR_BUFFER_BIT) for bound framebuffer 0 (the window's client area) gives GL_OUT_OF_MEMORY, and only on one of two GPUs; however, rendering still proceeds properly and correctly.
Now, all the gritty details:
So this is on a Vaio Z with two GPUs (which can be switched between with a physical toggle button on the machine):
OpenGL 4.2.0 # NVIDIA Corporation GeForce GT 640M LE/PCIe/SSE2 (GLSL: 4.20 NVIDIA via Cg compiler)
OpenGL 4.0.0 - Build 9.17.10.2867 # Intel Intel(R) HD Graphics 4000 (GLSL: 4.00 - Build 9.17.10.2867)
My program is in Go 1.0.3 64-bit under Win 7 64-bit using GLFW 64-bit.
I have a fairly simple and straightforward render-to-texture "mini pipeline". First, normal 3D geometry is rendered with the simplest of shaders (no lighting, nothing, just textured triangle meshes which are just a number of cubes and planes) to a framebuffer that has both a depth/stencil renderbuffer as depth/stencil attachment and a texture2D as color attachment. For the texture, all filtering is disabled, as are mipmaps.
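For reference, a minimal sketch of that kind of setup in plain C-style GL (rttColor, rttDepth, rttFbo, width and height are placeholder names, not the asker's actual Go code):
GLuint rttColor = 0, rttDepth = 0, rttFbo = 0;
// Color attachment: a plain texture, no filtering, no mipmaps.
glGenTextures(1, &rttColor);
glBindTexture(GL_TEXTURE_2D, rttColor);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Depth/stencil attachment: a renderbuffer.
glGenRenderbuffers(1, &rttDepth);
glBindRenderbuffer(GL_RENDERBUFFER, rttDepth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);
// Tie both to the FBO; glCheckFramebufferStatus should then report GL_FRAMEBUFFER_COMPLETE.
glGenFramebuffers(1, &rttFbo);
glBindFramebuffer(GL_FRAMEBUFFER, rttFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, rttColor, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, rttDepth);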
Then I render a full-screen quad (a single "oversized" full-screen tri, actually), just sampling from said framebuffer texture (color attachment) with texelFetch(tex, ivec2(gl_FragCoord.xy), 0), so no wrapping is used.
Both GPUs render this just fine, both when I force a core profile and when I don't. No GL errors are ever reported for this, all renders as expected too. Except when I resize the window while using the Intel HD 4000 GPU's GL 4.0 renderer (both in Core profile and Comp profile). Only in that case, a single resize will record a GL_OUT_OF_MEMORY error directly after the very next glClear(GL_COLOR_BUFFER_BIT) call on framebuffer 0 (the screen), but only once after the resize, not in every subsequent loop iteration.
Interestingly, I don't even actually do any allocations on resize! I have temporarily disabled ALL logic occurring on window resize -- that is, right now I simply ignore the window-resize event entirely, meaning the RTT framebuffer and its depth and color attachment resolutions are not even changed/recreated. The next glViewport will still use the same dimensions as when the window and GL context were first created, but in any case the error occurs on glClear() (not before, only after, only once -- I've double- and triple-checked).
Would this be a driver bug, or is there anything I could be doing wrongly here?
Interesting glitch in the HD's GL driver, it seems:
When I switched to the render-to-texture setup, I also set the depth/stencil bits at GL context creation for the primary framebuffer 0 (i.e. the screen/window) both to 0. This is when I started seeing this error, and the framerate became quite sluggish compared to before.
I experimentally set the (strictly speaking unneeded) depth bits to 8, and both of these issues are gone. So it seems the current HD 4000 GL 4.0 driver just doesn't like a value of 0 for its depth-buffer bits at GL context creation.
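In other words, the workaround is to request a non-zero depth buffer when the window/context is created. With GLFW 3 in C/C++ (a sketch; the original program is Go on an earlier GLFW version) that would be:
glfwWindowHint(GLFW_DEPTH_BITS, 8);    // a value of 0 here is what triggered the HD 4000 glitch
glfwWindowHint(GLFW_STENCIL_BITS, 0);
GLFWwindow* window = glfwCreateWindow(1280, 720, "render-to-texture demo", NULL, NULL);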

Android OpenGL ES 2.0 : Can a GL_FLOAT texture be assigned to a FBO as a COLOR attachment?

I want to get the value using GL_FLOAT texture by glReadPixels.
My Android device supports OES_texture_float, but attaching a GL_FLOAT texture results in an error.
In OpenGL ES 2.0 on Android, is it impossible to attach a GL_FLOAT texture to an FBO, or does it depend on the hardware?
Part of my code is:
Init:
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D,texture);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGB,texWidth,texHeight,0,GL_RGB,GL_FLOAT,NULL);
FBO Attach:
glBindFramebuffer(GL_FRAMEBUFFER,framebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,GL_TEXTURE_2D,texture,0);
checkGlError("FBO Settings");
// glGetError() returns 0x0502 (GL_INVALID_OPERATION).
status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
// glCheckFramebufferStatus() returns 0.
If anyone has some insight I'd appreciate it.
Unextended OpenGL ES 2.0 does not allow FBOs of this type; however, there are some extensions (and some mobile GPUs) that support floating-point buffers. Take a look at GL_OES_texture_float and GL_NV_fbo_color_attachments.
nVidia Tegra 3 supports floating-point FBOs.
P.S. With Tegra 2 it also seems to be possible: http://forums.developer.nvidia.com/devforum/discussion/1576/tegra-2-slow-floating-point-texture-operations/p1
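Whichever extension path you take, it is worth checking the extension string and still verifying framebuffer completeness before relying on the float attachment; a defensive sketch (variable names are hypothetical):
const char* exts = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
bool hasFloatTex = exts && strstr(exts, "GL_OES_texture_float") != NULL;
// ...glTexImage2D(..., GL_FLOAT, NULL) and glFramebufferTexture2D(...) as in the question...
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (!hasFloatTex || status != GL_FRAMEBUFFER_COMPLETE) {
    // Float color attachments are not usable on this device; fall back to an
    // 8-bit RGBA texture (or pack the float into RGBA bytes in the shader).
}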

Bad rendering with GL_TEXTURE_MIN_FILTER GL_LINEAR

I'm writing an application that uses a GLSL fragment shader to do some color conversion to RGB. This application uses GL_TEXTURE_RECTANGLE_ARB because I need to support NPOT textures.
The problem happens when a 1280x720 image is rendered to a smaller surface, say 640x480.
Apparently, my ATI Technologies Inc RV610 video device [Radeon HD 2400 PRO] has problems performing minification filtering with GL_LINEAR:
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
The red rectangle in the attached screenshot marks the exact location of the problem: green-ish lines (4 or 5 pixels high) are being rendered at the top of the video (the black borders around the image are not part of the rendering). Depending on the image being rendered, the color of the lines changes as well.
The problem doesn't happen with:
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
I've also tested this application on another PC with an Intel card, an Intel Corporation 4 Series Chipset Integrated Graphics Controller (rev 03), and there the problem happens regardless of the filtering mode being set. Coincidence?
Am I forgetting to do something in the code, or could this be a driver issue? I have several machines with the same Intel card and the problem occurs on all of them. It's important to state that this issue doesn't happen on an NVIDIA GeForce 7300.
Looking at the source code I see the following in the shader:
float CbY = ImgHeight + floor(t.y / 4.0);
float CrY = ImgHeight + chromaHeight_Half + floor(t.y / 4.0);
I have no idea why you add ImgHeight to the texture coordinates; all it would do is wrap around if that's the texture height. You're packing the different color components into a single texture, so you must take extra care to calculate the offsets correctly. That off-colored line a few pixels high is an indication that your texture coordinates are wrong. GL_NEAREST coerces to integers, but with GL_LINEAR the coordinates must match exactly. I suggest replacing floor with round.

Render cairo surface directly to OpenGL texture

I'm using cairo (http://cairographics.org) in combination with an OpenGL based 3D graphics library.
I'm currently using the 3D library on Windows, but I'm hoping to receive an answer that is platform independent.
This is all done in c++.
I've got the straightforward approach working, which is to use cairo_image_surface_create in combination with glTexImage2D to get an OpenGL texture.
However, from what I've been able to gather from the documentation, cairo_image_surface_create uses a CPU-based renderer and writes its output to main memory.
I've come to understand cairo has a new OpenGL based renderer which renders its output directly on the GPU, but I'm unable to find concrete details on how to use it.
(I've found some details on the glitz-renderer, but it seems to be deprecated and removed).
I've checked the surface list at: http://www.cairographics.org/manual/cairo-surfaces.html, but I feel I'm missing the obvious.
My question is: How do I create a cairo surface that renders directly to an OpenGL texture?
Do note: I'll need to be able to use the texture directly (without copying) to display the cairo output on screen.
As of 2015, Cairo GL SDL2 is probably the best way to use Cairo GL:
https://github.com/cubicool/cairo-gl-sdl2
If you are on an OS like Ubuntu, where cairo is not compiled with GL support, you will need to compile your own copy and let cairo-gl-sdl2 know where it is.
The glitz renderer has been replaced by the experimental cairo-gl backend.
You'll find a mention of it in:
http://cairographics.org/OpenGL/
Can't say if it's stable enough to be used though.
Once you have a GL backend working, you can render into a Framebuffer Object to draw directly into a given texture.
I did it using GL_BGRA.
int tex_w = cairo_image_surface_get_width(surface);
int tex_h = cairo_image_surface_get_height(surface);
unsigned char* data = cairo_image_surface_get_data(surface);
then do
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tex_w, tex_h, 0, GL_BGRA, GL_UNSIGNED_BYTE, data);
when you create the texture.
Use glTexSubImage2D(...) to update the texture when the image content changes. For speed, set the texture filters to GL_NEAREST.
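Putting those pieces together, a rough sketch (tex is a placeholder texture name; this assumes a CAIRO_FORMAT_ARGB32 image surface, which matches GL_BGRA/GL_UNSIGNED_BYTE on little-endian machines):
int tex_w = cairo_image_surface_get_width(surface);
int tex_h = cairo_image_surface_get_height(surface);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// cairo rows may be padded, so tell GL the actual row length in pixels.
glPixelStorei(GL_UNPACK_ROW_LENGTH, cairo_image_surface_get_stride(surface) / 4);
// Initial upload.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tex_w, tex_h, 0, GL_BGRA, GL_UNSIGNED_BYTE, cairo_image_surface_get_data(surface));
// Whenever the cairo drawing changes:
cairo_surface_flush(surface);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, tex_w, tex_h, GL_BGRA, GL_UNSIGNED_BYTE, cairo_image_surface_get_data(surface));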