DXT Texture working despite S3TC not being supported - c++

The topic involves OpenGL ES 2.0.
I have a device that, when queried for extensions via
glGetString(GL_EXTENSIONS)
returns a list of supported extensions, none of which is GL_EXT_texture_compression_s3tc.
AFAIK, without GL_EXT_texture_compression_s3tc it should not be possible to use DXT-compressed textures.
However, when DXT-compressed textures are used on the device, they render without any problems.
Texture data is committed using glCompressedTexImage2D.
I tried DXT1, DXT3 and DXT5.
Why does it work? Is it safe to use a compressed texture format even though it does not appear to be supported?
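For reference, the upload path is roughly the following (a minimal sketch; the data pointer, dimensions and filtering are placeholders, and GL_COMPRESSED_RGBA_S3TC_DXT1_EXT comes from GLES2/gl2ext.h):

    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h> // defines GL_COMPRESSED_RGBA_S3TC_DXT1_EXT etc.

    // Upload a DXT1-compressed image; DXT1 packs each 4x4 pixel block into
    // 8 bytes (DXT3 and DXT5 use 16 bytes per block).
    GLuint uploadDXT1(const unsigned char* data, GLsizei width, GLsizei height)
    {
        GLsizei imageSize = ((width + 3) / 4) * ((height + 3) / 4) * 8;

        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT1_EXT,
                               width, height, 0, imageSize, data);
        return tex;
    }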

I think that the absence of GL_EXT_texture_compression_s3tc from the extension string does not mean that you can't use compressed formats. They may be supported anyway.
Quote from the glCompressedTexImage2D documentation page for ES 2.0:
The texture image is decoded according to the extension specification
defining the specified internalformat. OpenGL ES (...) provide a
mechanism to obtain symbolic constants for such formats provided by
extensions. The number of compressed texture formats supported can be
obtained by querying the value of GL_NUM_COMPRESSED_TEXTURE_FORMATS.
The list of specific compressed texture formats supported can be
obtained by querying the value of GL_COMPRESSED_TEXTURE_FORMATS.
Note that there is nothing there about GL_EXT_texture_compression_s3tc. A capability may be implemented even though its 'pseudo-standardized' (i.e. extension-defined) name is not listed as supported.
You should probably query those constants (GL_NUM_COMPRESSED_TEXTURE_FORMATS and GL_COMPRESSED_TEXTURE_FORMATS) using glGetIntegerv() to learn which compressed formats are actually supported.
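A minimal sketch of that query, assuming a current ES 2.0 context:

    #include <GLES2/gl2.h>
    #include <cstdio>
    #include <vector>

    // Print every compressed texture format the implementation actually accepts,
    // regardless of what the extension string advertises.
    void listCompressedFormats()
    {
        GLint count = 0;
        glGetIntegerv(GL_NUM_COMPRESSED_TEXTURE_FORMATS, &count);

        std::vector<GLint> formats(count > 0 ? count : 0);
        if (count > 0)
            glGetIntegerv(GL_COMPRESSED_TEXTURE_FORMATS, formats.data());

        for (GLint format : formats)
            std::printf("supported compressed format: 0x%04X\n", format);
        // 0x83F0-0x83F3 are the S3TC DXT1/DXT3/DXT5 formats.
    }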

Related

glTexImage2D - GL_TEXTURE_RECTANGLE_NV and compressed internal format

I have modified legacy code (OpenGL 2.1) which uses glTexImage2D with the GL_TEXTURE_RECTANGLE_NV texture target. I have noticed that when I set a compressed internal format, for example GL_COMPRESSED_RGBA_S3TC_DXT5_EXT, it doesn't work with GL_TEXTURE_RECTANGLE_NV (I get a white texture). I have tested other scenarios and everything works fine, i.e. GL_TEXTURE_2D with a compressed internal format, and GL_TEXTURE_RECTANGLE_NV with a non-compressed internal format. Does this mean that GL_TEXTURE_RECTANGLE_NV can't be used with compressed formats?
Here's what the spec for the NV_texture_rectangle extension says about compressed formats:
Can compressed texture images be specified for a rectangular texture?
RESOLUTION: The generic texture compression internal formats
introduced by ARB_texture_compression are supported for rectangular
textures because the image is not presented as compressed data and
the ARB_texture_compression extension always permits generic texture
compression internal formats to be stored in uncompressed form.
Implementations are free to support generic compression internal
formats for rectangular textures if supported but such support is
not required.
This extension makes a blanket statement that specific compressed
internal formats for use with CompressedTexImage<n>DARB are NOT
supported for rectangular textures. This is because several
existing hardware implementations of texture compression formats
such as S3TC are not designed for compressing rectangular textures.
This does not preclude future texture compression extensions from
supporting compressed internal formats that do work with rectangular
textures (by relaxing the current blanket error condition).
So your specific format, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT, is not required to be supported for rectangular textures; S3TC is exactly the kind of format the spec describes as "not designed for compressing rectangular textures".
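If you want to see what the driver actually did with the rectangle texture, you can ask it directly; a sketch, assuming a desktop GL context with NV_texture_rectangle and the usual glext.h constants:

    #include <GL/gl.h>
    #include <GL/glext.h> // GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_COMPRESSED
    #include <cstdio>

    // With the rectangle texture bound, query whether the driver actually stored
    // it compressed and which internal format it chose.
    void checkRectangleTexture()
    {
        GLint compressed = GL_FALSE, internalFormat = 0;
        glGetTexLevelParameteriv(GL_TEXTURE_RECTANGLE_NV, 0,
                                 GL_TEXTURE_COMPRESSED, &compressed);
        glGetTexLevelParameteriv(GL_TEXTURE_RECTANGLE_NV, 0,
                                 GL_TEXTURE_INTERNAL_FORMAT, &internalFormat);
        std::printf("compressed: %d, internal format: 0x%04X\n",
                    compressed, internalFormat);
    }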

Does a conversion take place when uploading a texture of GL_ALPHA format (glTexImage2D)?

The documentation for glTexImage2D says
GL_RED (for GL) / GL_ALPHA (for GL ES). "The GL converts it to floating point and assembles it into an RGBA element by attaching 0 for green and blue, and 1 for alpha. Each component is clamped to the range [0,1]."
I've read through the GL ES specs to see if it specifies whether the GPU memory is actually 32-bit vs 8-bit, but it seems rather vague. Can anyone confirm whether uploading a texture as GL_RED / GL_ALPHA gets converted from 8-bit to 32-bit on the GPU?
I'm interested in answers for GL and GL ES.
I've read through the GL ES specs to see if it specifies whether the GPU memory is actually 32-bit vs 8-bit, but it seems rather vague.
Well, that's what it is. The actual details are left for the implementation to decide. Giving such liberties in the specification allows implementations to contain optimizations tightly tailored to the target system. For example, a certain GPU may cope better with a 10-bits-per-channel format, so it is at liberty to convert to such a format.
So it's impossible to say in general, but a specific implementation (i.e. GPU + driver) will likely choose a particular format. Which one depends on the GPU and driver.
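On desktop GL you can at least inspect what the driver picked (ES 2.0 has no equivalent query); a sketch, assuming the texture in question is bound to GL_TEXTURE_2D:

    #include <GL/gl.h>
    #include <cstdio>

    // Ask the driver how many bits it actually allocated per component for the
    // level-0 image of the currently bound GL_TEXTURE_2D.
    void printStoredTextureFormat()
    {
        GLint internalFormat = 0, redBits = 0, alphaBits = 0;
        glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internalFormat);
        glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &redBits);
        glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_ALPHA_SIZE, &alphaBits);
        std::printf("internal format 0x%04X, red bits: %d, alpha bits: %d\n",
                    internalFormat, redBits, alphaBits);
    }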
Following on from what datenwolf has said, I found the following in the "POWERVR SGX OpenGL ES 2.0 Application Development Recommendations" document:
6.3. Texture Upload
When you upload textures to the OpenGL ES driver via glTexImage2D, the input data is usually in linear scanline format. Internally, though, POWERVR SGX uses a twiddled layout (i.e. following a plane-filling curve) to greatly improve memory access locality when texturing. Because of this different layout uploading textures will always require a somewhat expensive reformatting operation, regardless of whether the input pixel format exactly matches the internal pixel format or not.
For this reason we recommend that you upload all required textures at application or level start-up time in order to not cause any framerate dips when additional textures are uploaded later on.
You should especially avoid uploading texture data mid-frame to a texture object that has already been used in the frame.

How to read a texture image into OpenGL

What is the easiest format to read a texture into OpenGL? Are there any tutorials - good tutorials - for loading image formats like JPG, PNG, or raw into an array which can be used for texture mapping (preferably without the use of a library like libpng)?
OpenGL itself knows nothing about common image formats (other than the natively supported S3TC/DXT compression and the like, but those are a different story). You need to expand your source images into RGBA arrays. A number of formats and combinations are supported. You need to choose one that suits you, e.g. GL_ALPHA4 for masks, GL_RGB5_A1 for 1-bit transparency, GL_BGRA/GL_RGBA for full color, etc.
For me the easiest (not the fastest) way is PNG, for its lossless compression and full alpha support. I read the PNG and write the RGBA values into an array, which I then hand over to OpenGL texture creation. If you don't need alpha, you may as well use JPG or BMP. The pipeline is always the same: source image -> expanded RGBA array -> OpenGL texture.
There is a handy OpenGL texture tutorial available at the link: http://www.nullterminator.net/gltexture.html
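The OpenGL end of that pipeline is small; a sketch (the RGBA array itself comes from whatever PNG/JPG decoder you use, and the createTextureFromRGBA name is just for illustration):

    #include <GL/gl.h>

    // 'pixels' is a width*height*4 byte RGBA array produced by your image decoder.
    GLuint createTextureFromRGBA(const unsigned char* pixels, int width, int height)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        return tex;
    }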

Difference between OpenGL and D3D pixel formats

I'm continuing to try and develop an OpenGL path for my software. I'm using abstract classes with concrete implementations for both, but obviously I need a common pixel format enumerator so that I can describe texture, backbuffer/frontbuffer and render-target formats between the two. I provide a function in each concrete implementation that accepts my abstract identifier for, say, R8G8B8A8, and provides the concrete implementation with an enum suitable for either D3D or OpenGL.
I can easily enumerate all D3D pixel formats using CheckDeviceFormat. For OpenGL, I'm first iterating through the available accelerated Win32 formats (using DescribePixelFormat) and then looking at the PIXELFORMATDESCRIPTOR to see how it's made up, so I can assign it one of my enums. This is where my problems start:
I want to be able to discover all accelerated formats that OpenGL supports on any given system, comparable to the D3D formats. But according to the format descriptors, there aren't any RGB formats (they're all BGR). Further, things like the DXT1-5 formats, enumerable in D3D, aren't enumerable using the above method. For the latter, I suppose I can just assume that if the extension is available, it's a hardware-accelerated format.
For the former (how to interpret the format descriptor in terms of RGB/BGR, etc.), I'm not too sure how it works.
Anyone know about this stuff?
Responses appreciated.
Ok, I think I found what I was looking for:
OpenGL image formats
Some image formats are defined by the spec (for the backbuffer/depth-stencil, textures, render targets, etc.), so there is a guarantee, to an extent, that these will be available (they don't need enumerating). The pixel format descriptor can still be used to work out the available front-buffer formats for the given window.
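For reference, the Win32 enumeration of front-buffer formats via DescribePixelFormat looks roughly like this (a sketch; error handling is omitted, and the hdc comes from your window):

    #include <windows.h>
    #include <cstdio>

    // Walk every pixel format the driver exposes for this device context; the
    // c*Shift fields reveal the component ordering (RGB vs BGR) in memory.
    void enumeratePixelFormats(HDC hdc)
    {
        PIXELFORMATDESCRIPTOR pfd = {};
        int count = DescribePixelFormat(hdc, 1, sizeof(pfd), &pfd); // returns the highest format index
        for (int i = 1; i <= count; ++i)
        {
            DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);
            if (!(pfd.dwFlags & PFD_SUPPORT_OPENGL))
                continue;
            std::printf("format %d: color %d (R%d G%d B%d A%d), depth %d, stencil %d, "
                        "red shift %d, blue shift %d\n",
                        i, pfd.cColorBits, pfd.cRedBits, pfd.cGreenBits, pfd.cBlueBits,
                        pfd.cAlphaBits, pfd.cDepthBits, pfd.cStencilBits,
                        pfd.cRedShift, pfd.cBlueShift);
        }
    }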

OpenGL Colorspace Conversion

Does anyone know how to create a texture with a YUV colorspace so that we can get hardware-based YUV to RGB colorspace conversion without having to use a fragment shader? I'm using an NVIDIA 9400 and I don't see an obvious GL extension that does the trick. I've found examples of how to use a fragment shader, but the project I'm working on currently only supports OpenGL 1.1 and I don't have time to convert it to 2.0 and perform all the regression testing necessary. This is also targeting Linux. On other platforms I've been using a MESA extension, but it doesn't function on the NVIDIA card.
Since you're okay with using extensions but worried about going all-out with OpenGL 2.0, consider providing a simple fragment shader with the old-school ARB_fragment_program extension.
Alternatively, you could use a library like DevIL, ImageMagick, or FreeImage to perform the conversion for you.
Is the MESA extension you mention for YCrCb? If your NVIDIA card does not expose it, it means they did not expose support for that texture format (exposing the extension is how a card advertises that it supports the format).
Your only option is to do the color conversion outside of the texture filtering block (prior to submitting the texture data to GL, or after getting the values out of the texture filtering block).
GL can still help, as the linear transform is doable in GL 1.1, provided you have the right extensions (the dot3 texture combiner extension). That said, it's far from a panacea.
For what it's worth, the BINK implementation seems to do the conversion on the CPU, using MMX (that's reading between the lines, though). I would probably do the same, converting with SSE prior to uploading to OpenGL. A CPU is fast enough to do this every frame.
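For illustration, the per-pixel math of such a CPU conversion (a sketch using the common BT.601 integer approximation; an SSE/MMX version just vectorizes the same arithmetic before the result is uploaded with a plain glTexImage2D):

    #include <algorithm>

    static inline unsigned char clamp8(int v)
    {
        return (unsigned char)std::min(std::max(v, 0), 255);
    }

    // Convert one BT.601 YCbCr pixel (video range) to 8-bit RGB.
    void yuvToRgb(unsigned char y, unsigned char u, unsigned char v, unsigned char* rgb)
    {
        int c = (int)y - 16, d = (int)u - 128, e = (int)v - 128;
        rgb[0] = clamp8((298 * c + 409 * e + 128) >> 8);            // R
        rgb[1] = clamp8((298 * c - 100 * d - 208 * e + 128) >> 8);  // G
        rgb[2] = clamp8((298 * c + 516 * d + 128) >> 8);            // B
    }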