glTexImage2D - GL_TEXTURE_RECTANGLE_NV and compressed internal format

I have modified legacy code (OpenGL 2.1) which uses glTexImage2D with the GL_TEXTURE_RECTANGLE_NV texture target. I have noticed that when I set a compressed internal format, for example GL_COMPRESSED_RGBA_S3TC_DXT5_EXT, it doesn't work with GL_TEXTURE_RECTANGLE_NV (I get a white texture). I have tested other scenarios and everything works fine, i.e. GL_TEXTURE_2D with a compressed internal format, and GL_TEXTURE_RECTANGLE_NV with a non-compressed internal format. Does this mean that GL_TEXTURE_RECTANGLE_NV can't be used with compressed formats?

Here's what the spec for the NV_texture_rectangle extension says about compressed formats:
Can compressed texture images be specified for a rectangular texture?
RESOLUTION: The generic texture compression internal formats
introduced by ARB_texture_compression are supported for rectangular
textures because the image is not presented as compressed data and
the ARB_texture_compression extension always permits generic texture
compression internal formats to be stored in uncompressed form.
Implementations are free to support generic compression internal
formats for rectangular textures if supported but such support is
not required.
This extension makes a blanket statement that specific compressed
internal formats for use with CompressedTexImage<n>DARB are NOT
supported for rectangular textures. This is because several
existing hardware implementations of texture compression formats
such as S3TC are not designed for compressing rectangular textures.
This does not preclude future texture compression extensions from
supporting compressed internal formats that do work with rectangular
textures (by relaxing the current blanket error condition).
So your specific format, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT, is one of the specific compressed formats that the extension rules out for rectangular textures, S3TC being exactly the kind of format described as "not designed for compressing rectangular textures".
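If you need to keep the rectangle target, one pragmatic approach is to try the compressed format and fall back to an uncompressed one when the implementation rejects it. A minimal sketch, assuming you already have width, height, and an RGBA pixels buffer; the GL_RGBA8 fallback is my choice, not something the extension mandates:

    /* Try a compressed internal format on a rectangle texture; if the
       implementation rejects it, re-specify with plain GL_RGBA8. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_RECTANGLE_NV, tex);

    while (glGetError() != GL_NO_ERROR) {}  /* clear stale errors */
    glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                 width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    GLint compressed = GL_FALSE;
    glGetTexLevelParameteriv(GL_TEXTURE_RECTANGLE_NV, 0,
                             GL_TEXTURE_COMPRESSED, &compressed);

    if (glGetError() != GL_NO_ERROR || compressed == GL_FALSE) {
        /* The driver raised an error or stored the image uncompressed:
           fall back to a non-compressed internal format. */
        glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_RGBA8,
                     width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }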

Related

Lossless texture compression for OpenGL

I have several 32-bit (with alpha channel) bitmap images which I'm using as essential information in my game. The slightest change in RGBA values breaks everything, so I can't use lossy compression methods like S3TC.
Are there any feasible lossless compression algorithms I can use with OpenGL? I'm using fragment shaders and I want to use the glCompressedTexImage2D() method to define the texture. I haven't tried compressing the texture with OpenGL using the GL_COMPRESSED_RGBA parameter; is there any chance I can get lossless compression that way?
Texture compression, as opposed to regular image compression, is designed for one specific purpose: being a texture. And that means fast random access of data.
Lossless compression formats do not tend to do well when it comes to random access patterns. The major lossless compression formats are some form of RLE or table-based encoding. These are adequate for decompressing the entire dataset at once, but they're terrible at telling you in which memory location the value for texel (U,V) lives.
And that question gets asked a lot when accessing textures.
As such, there are no lossless hardware texture compression formats.
Your options are limited to the following:
Use texture memory as a kind of cache. That is, when you determine that you will need a particular image in this frame, decompress it. This could be done on the CPU or GPU (via compute shaders or the like). Note that for fast GPU decompression, you will have to come up with a compression scheme that takes advantage of parallel execution. Most lossless compression formats are not particularly parallel.
If a particular image has not been used in some time, you put it in a "subject to be reused" pile. And if you need to decompress a new image, you can take the least-recently-used image off of that pile, rather than constantly creating/destroying OpenGL texture objects.
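That "subject to be reused" pile is essentially an LRU cache of GL texture objects. A rough sketch of the bookkeeping, where uploadDecompressed() stands in for whatever CPU/GPU decompression you implement (the type names and the capacity are made up):

    #include <GL/gl.h>
    #include <list>
    #include <unordered_map>

    struct TextureCache {
        struct Entry { int imageId; GLuint tex; };
        std::list<Entry> lru;                         // front = most recently used
        std::unordered_map<int, std::list<Entry>::iterator> index;
        std::size_t capacity = 64;                    // tune to your VRAM budget

        GLuint get(int imageId) {
            auto it = index.find(imageId);
            if (it != index.end()) {                  // cache hit: refresh recency
                lru.splice(lru.begin(), lru, it->second);
                return it->second->tex;
            }
            GLuint tex;
            if (lru.size() >= capacity) {             // recycle the least-recently-used
                Entry victim = lru.back();            // texture object instead of
                lru.pop_back();                       // destroying and recreating it
                index.erase(victim.imageId);
                tex = victim.tex;
            } else {
                glGenTextures(1, &tex);
            }
            uploadDecompressed(tex, imageId);         // decompress + glTexImage2D
            lru.push_front({imageId, tex});
            index[imageId] = lru.begin();
            return tex;
        }

        void uploadDecompressed(GLuint tex, int imageId);
    };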
Build your own lossless compression scheme, designed for your specific needs. If you absolutely need exact texel values from the texture, I assume that you aren't using linear filtering when accessing these textures. So these aren't really colors; they're arbitrary information about a texel.
I might suggest field compression (improved packing of your bits in the available space). But without knowing what your data actually is or means, I can't say whether your particular use case is amenable to it.
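To make "field compression" concrete: a hypothetical packing of three small per-texel fields into one 16-bit value. The field widths here are invented for illustration; the packed result could live in, say, a GL_R16UI texture read back exactly with texelFetch:

    #include <cstdint>

    // 5 bits material id, 6 bits height, 5 bits flags -> one 16-bit texel.
    uint16_t packTexel(unsigned material, unsigned height, unsigned flags) {
        return (uint16_t)((material & 0x1Fu)
                        | ((height  & 0x3Fu) << 5)
                        | ((flags   & 0x1Fu) << 11));
    }

    void unpackTexel(uint16_t p, unsigned& material, unsigned& height, unsigned& flags) {
        material = p & 0x1Fu;
        height   = (p >> 5) & 0x3Fu;
        flags    = (p >> 11) & 0x1Fu;
    }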

Frostbite PBR: different compression on separate texture channels of the same texture

Is it possible, and if so, how would it be possible to create and read an RGBA texture with different compression algorithms on separate channels in OpenGL 4.x?
Example A (without real meaning):
RG channels store a normal map encoded in 3Dc
B channel stores height values for, let's say, tessellation with some encoding
A channel stores a raw MaterialID without compression
Example B:
RGB stores material parameters compressed with DXT1
A stores MaterialID without compression
Background:
In the Frostbite implementation of Physically Based Rendering (PBR) (PDF), on pages 15 and 18, the authors describe how they structured material parameters into different texture channels. They also mention that they avoided texture compression on some channels, without going into detail about which channels they mean.
Page 15
All basic attributes (Normal, BaseColor, Smoothness, MetalMask, Reflectance) need to be blendable to support deferred decals. Unblendable attributes, like MaterialId are stored into the alpha channel. We have also avoided compression and encoding mechanisms which could affect blending quality.
Page 18
We chose to avoid compression on few of our attributes and rely on simple, linearly-interpolating alpha blending in this case.
There is no OpenGL or hardware support for reading a texture with different compression on different channels. Of course, one could create a prepass which splits such a handcrafted texture into separate textures and then decompresses each one separately.
As Nicol Bolas suggested, the Frostbite PBR implementation avoids compression due to blending.
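As a sketch of such a split (formats and names are illustrative, not what Frostbite actually ships): keep the blendable parameters in an S3TC-compressed RGB texture and the MaterialId in a separate uncompressed single-channel texture, then sample both with the same UV in the shader.

    // Texture 1: blendable material parameters, DXT1-compressed offline.
    // For DXT1, imageSize = ceil(w/4) * ceil(h/4) * 8 bytes.
    GLuint params;
    glGenTextures(1, &params);
    glBindTexture(GL_TEXTURE_2D, params);
    glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                           w, h, 0, dxtSize, dxtData);

    // Texture 2: MaterialId, uncompressed 8-bit single channel.
    GLuint materialId;
    glGenTextures(1, &materialId);
    glBindTexture(GL_TEXTURE_2D, materialId);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, w, h, 0,
                 GL_RED, GL_UNSIGNED_BYTE, idData);

    // In the fragment shader:
    //   vec3  p  = texture(uParams, uv).rgb;
    //   float id = texelFetch(uMaterialId, texelCoord, 0).r;  // exact ids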

DXT Texture working despite S3TC not being supported

The topic involves OpenGL ES 2.0.
I have a device that, when queried for supported extensions via
glGetString(GL_EXTENSIONS)
returns a list in which GL_EXT_texture_compression_s3tc does not appear.
AFAIK, not having GL_EXT_texture_compression_s3tc should mean DXT compressed textures cannot be used.
However, when DXT compressed textures are used on the device, they render without any problems.
Texture data is committed using glCompressedTexImage2D.
Tried with DXT1, DXT3 and DXT5.
Why does it work? Is it safe to use a texture compression format even though it seems not to be supported?
I think that missing support for GL_EXT_texture_compression_s3tc does not mean that you can't use compressed formats. They may be supported anyway.
Quote from the glCompressedTexImage2D doc page for ES2:
The texture image is decoded according to the extension specification
defining the specified internalformat. OpenGL ES (...) provide a
mechanism to obtain symbolic constants for such formats provided by
extensions. The number of compressed texture formats supported can be
obtained by querying the value of GL_NUM_COMPRESSED_TEXTURE_FORMATS.
The list of specific compressed texture formats supported can be
obtained by querying the value of GL_COMPRESSED_TEXTURE_FORMATS.
Note that there is nothing there about GL_EXT_texture_compression_s3tc. Support for various capabilities may be implemented even though their 'pseudo-standardized' (that is, extension-defined) substitutes are not listed as supported.
You should probably query those constants (GL_NUM_COMPRESSED_TEXTURE_FORMATS and GL_COMPRESSED_TEXTURE_FORMATS) using glGetIntegerv() to learn which compressed formats are actually supported.
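A minimal sketch of that query; the hex values are the GL_EXT_texture_compression_s3tc constants, written numerically since an ES2 header may not define them:

    #include <cstdio>
    #include <vector>

    GLint count = 0;
    glGetIntegerv(GL_NUM_COMPRESSED_TEXTURE_FORMATS, &count);

    std::vector<GLint> formats(count);
    if (count > 0)
        glGetIntegerv(GL_COMPRESSED_TEXTURE_FORMATS, formats.data());

    for (GLint f : formats) {
        // 0x83F0 = DXT1 RGB, 0x83F1 = DXT1 RGBA, 0x83F2 = DXT3, 0x83F3 = DXT5
        if (f >= 0x83F0 && f <= 0x83F3)
            std::printf("S3TC/DXT format 0x%04X is supported\n", f);
    }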

Compressed Textures in OpenGL

I have read that compressed textures are not readable and are not color-renderable.
Though I have some idea of why this is not allowed, can someone explain in a little more detail?
What exactly does it mean that they are not readable? That I cannot read from them in a shader using, say, imageLoad? Or that I can't even sample from them?
What does it mean that they cannot be rendered to? Is it because the user would see all garbage anyway, so it's not allowed?
I have not tried using compressed textures.
Compressed textures are "readable", by most useful definitions of that term. You can read from them via samplers. However, you can't use imageLoad operations on them. Why? Because reading such memory is not a simple memory fetch. It involves fetching lots of memory and doing a decompression operation.
Compressed images are not color-renderable, which means they cannot be attached to an FBO and used as a render target. One might think the reason for this is obvious, but in case you need it spelled out: writing to a compressed image requires doing image compression on the fly. And most texture compression formats (or compressed formats of any kind) are not designed to easily deal with changing a few values. Not to mention, most compressed texture formats are lossy, so every time you do a decompress/write/recompress operation, you lose image fidelity.
From the OpenGL Wiki:
Despite being color formats, compressed images are not color-renderable, for obvious reasons. Therefore, attaching a compressed image to a framebuffer object will cause that FBO to be incomplete and thus unusable. For similar reasons, no compressed formats can be used as the internal format of renderbuffers.
So "not color render-able" means that they can't be used in FBOs.
I'm not sure what "not readable" means; it may mean that you can't bind them to an FBO and read from the FBO (since you can't bind them to an FBO in the first place).
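You can observe the non-renderability directly: attach a compressed texture to an FBO and check completeness. A sketch, assuming compressedTex already holds a compressed image (the exact incomplete status returned is up to the implementation):

    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, compressedTex, 0);

    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE) {
        // Expected for a compressed color attachment, e.g.
        // GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT or GL_FRAMEBUFFER_UNSUPPORTED.
    }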

How to read a texture image into OpenGL

What is the easiest format to read a texture into OpenGL? Are there any tutorials (good tutorials) for loading image formats like JPG, PNG, or raw into an array which can be used for texture mapping (preferably without the use of a library like libpng)?
OpenGL itself knows nothing about common image formats (other than the natively supported S3TC/DXT compressed formats and the like, but they are a different story). You need to expand your source images into RGBA arrays. A number of formats and combinations are supported; you need to choose one that suits you, e.g. GL_ALPHA4 for masks, GL_RGB5_A1 for 1-bit transparency, GL_BGRA/GL_RGBA for full color, etc.
For me the easiest (not the fastest) way is PNG, for its lossless compression and full alpha support. I read the PNG and write RGBA values into an array which I then hand over to OpenGL texture creation. If you don't need alpha you may as well accept JPG or BMP. The pipeline is always the same: source image -> expanded RGBA array -> OpenGL texture.
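Whichever decoder you use, the final upload step is the same. A sketch assuming pixels is a tightly packed width*height RGBA8 array you decoded yourself:

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows are tightly packed
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);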
There is a handy OpenGL texture tutorial available at http://www.nullterminator.net/gltexture.html